You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese wouldn’t slide off (pssst… please don’t do this).

Well, according to an interview with Google CEO Sundar Pichai published at The Verge earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • givesomefucks@lemmy.world · +382/-44 · 1 month ago

    They keep saying it’s impossible, when the truth is it’s just expensive.

    That’s why they won’t do it.

    You could only train AI with good sources (scientific literature, not social media) and then pay experts to talk with the AI for long periods of time, giving feedback directly to the AI.

    Essentially, if you want a smart AI you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.

    • Excrubulent@slrpnk.net · +165/-13 · 1 month ago

      No he’s right that it’s unsolved. Humans aren’t great at reliably knowing truth from fiction too. If you’ve ever been in a highly active comment section you’ll notice certain “hallucinations” developing, usually because someone came along and sounded confident and everyone just believed them.

      We don’t even know how to get real people to do this reliably, so how does a fancy Markov chain do it? It can’t. I don’t think you solve this problem without AGI, and that’s something AI evangelists don’t want to think about, because then the conversation changes significantly. They’re in this for the hype bubble, not the ethical implications.

      • dustyData@lemmy.world · +86/-10 · 1 month ago

        We do know. It’s called critical thinking education. This is why we send people to college. Of course there are highly educated morons, but we are edging bets. This is why the dismantling or coopting of education is the first thing every single authoritarian does. It makes it easier to manipulate masses.

        • Excrubulent@slrpnk.net · +59/-1 · 1 month ago

          “Edging bets” sounds like a fun game, but I think you mean “hedging bets”, in which case you’re admitting we can’t actually do this reliably with people.

          And we certainly can’t do that with an LLM, which doesn’t actually think.

          • explore_broaden@midwest.social · +5 · 1 month ago

            I think that’s more a function of the fact that it’s difficult to verify that every one of the over 1M college graduates each year isn’t a “moron” (someone very prone to believing things other people made up). I think it would be possible to ensure a person has these critical thinking skills with a concerted effort.

          • dustyData@lemmy.world · +1/-9 · 1 month ago

            Choose a lane; this comment directly contradicts your previous comment. I think you are just trolling and being an idiot with corrections to elicit reactions.

        • RidcullyTheBrown@lemmy.world · +8/-3 · 1 month ago

          What does this have to do with AI and with what OP said? Their point was obviously about the limitations of the software, not some lament about critical thinking.

      • scarabic@lemmy.world · +2 · 1 month ago

        Humans aren’t great at reliably knowing truth from fiction too

        You’re exactly right. There is a similar debate about automated cars. A lot of people want them off the roads until they are perfect, when the bar should be “until they are safer than humans,” and human drivers are fucking awful.

        Perhaps for AI the standard should be “more reliable than social media for finding answers” and we all know social media is fucking awful.

        • Excrubulent@slrpnk.net · +1 · edited · 1 month ago

          The problem with these hallucinated answers, and what makes them such a sensational story, is that they are obviously wrong to virtually anyone. Your uncle on Facebook who thinks the earth is flat immediately knows not to put glue on pizza. It’s obvious. The same way it’s obvious when hands are wrong in an image, or when someone’s hair is also the background foliage. We know why that’s wrong; the machine can’t know anything.

          Similarly, as “bad” as human drivers are, we don’t get flummoxed because you put a traffic cone on the hood, and we don’t just drive into the sides of trucks because they have sky-blue liveries. We don’t just plow through pedestrians because we decided the person who is clearly standing there just didn’t matter. Or at least, that’s a distinct aberration.

          Driving is a constant stream of judgement calls, and humans can make those calls because they understand that a human is more important than a traffic cone. An autonomous system cannot understand that distinction. This kind of problem crops up all the time, and it’s why there is currently no such thing as an unsupervised autonomous vehicle system. Even Waymo is just doing a trick with remote supervision.

          Despite the promises of “lower rates of crashes”, we haven’t actually seen that happen, and there’s no indication that they’re really getting better.

          Sorry but if your takeaway from the idea that even humans aren’t great at this task is that AI is getting close then I think you need to re-read some of the batshit insane things it’s saying. It is on an entirely different level of wrong.

    • RBG@discuss.tchncs.de · +61/-6 · 1 month ago

      I’ll let you in on a secret: scientific literature has its fair share of bullshit too. The issue is, it is much harder to figure out its bullshit, unless it’s the most blatant horseshit you’ve ever scientifically seen. So while it absolutely makes sense to say “let’s just train these on good sources,” there is no source that is just that. Of course, it is still better to do it like that than as they do it now.

      • givesomefucks@lemmy.world · +36/-4 · 1 month ago

        The issue is, it is much harder to figure out its bullshit.

        Google AI suggested you put glue on your pizza because a troll said it on Reddit once…

        Not all scientific literature is perfect. Which is one of the many factors that will still make my plan expensive and time-consuming.

        You can’t throw a toddler in a library and expect them to come out knowing everything in all the books.

        AI needs that guided teaching too.

      • callouscomic@lemm.ee · +10/-10 · 1 month ago

        “Most published journal articles are horseshit, so I guess we should be okay with this too.”

        • Turun@feddit.de · +1 · 1 month ago

          No, it’s simply contradicting the claim that it is possible.

          We literally don’t know how to fix it. We can put on bandaids, like training on “better” data and fine-tune it to say “I don’t know” half the time. But the fundamental problem is simply not solved yet.
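The “fine-tune it to say ‘I don’t know’” bandaid usually amounts to thresholding the model’s own confidence. A toy sketch of the idea (the answers, probabilities, and threshold are all invented for illustration); the flaw it shares with the real thing is that these probabilities measure textual likelihood, not truth, so confident nonsense still gets through:

```python
# Toy abstention wrapper: refuse to answer when the model's top
# candidate has low probability. Note the probabilities come from
# the model itself, so a confidently-wrong answer passes anyway.
def answer_or_abstain(answer_probs, threshold=0.6):
    best, p = max(answer_probs.items(), key=lambda kv: kv[1])
    return best if p >= threshold else "I don't know"

# Confidently wrong: high probability mass on a bad answer.
confident_nonsense = {"add glue": 0.75, "chill the dough": 0.25}
# Genuinely uncertain: no candidate clears the threshold.
unsure = {"add glue": 0.4, "chill the dough": 0.35}

print(answer_or_abstain(confident_nonsense))  # "add glue" slips through
print(answer_or_abstain(unsure))              # "I don't know"
```

This is why the comment calls it a bandaid rather than a fix: the gate keys off confidence, which is not correlated with factual accuracy strongly enough to be reliable.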

    • Zarxrax@lemmy.world · +46/-1 · 1 month ago

      In addition to the other comment, I’ll add that just because you train the AI on good and correct sources of information, it still doesn’t necessarily mean that it will give you a correct answer all the time. It’s more likely, but not ensured.

    • Leate_Wonceslace@lemmy.dbzer0.com · +31 · 1 month ago

      it’s just expensive

      I’m a mathematician who’s been following this stuff for about a decade or more. It’s not just expensive. Generative neural networks cannot reliably evaluate truth values; it will take time to research how to improve AI in this respect. This is a known limitation of the technology. Closely controlling the training data would certainly make the information more accurate, but that won’t stop it from hallucinating.

      The real answer is that they shouldn’t be trying to answer questions using an LLM, especially because they had a decent algorithm already.

    • vrighter@discuss.tchncs.de · +21/-3 · 1 month ago

      no, the truth is it’s impossible even then. If the result involves randomness at its most fundamental level, then it’s not reliable whatever you do.

    • jeeva@lemmy.world · +8/-1 · 1 month ago

      That’s just not how LLMs work, bud. It doesn’t have understanding to improve, it just munges the most likely word next in line. It, as a technology, won’t advance past that level of accuracy until it’s a completely different approach.
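The “most likely next word” loop the comment describes can be sketched in a few lines. This is a toy with a made-up four-word vocabulary and invented logits, not any real model, but it shows the mechanics: the model scores tokens, the scores become probabilities, and one token is sampled:

```python
import math
import random

# Toy next-token step. A real LLM scores a vocabulary of ~100k tokens;
# this fake 4-word vocabulary and its invented logits just show the loop.
vocab = ["glue", "cheese", "sauce", "tomato"]
logits = [2.0, 3.5, 3.0, 0.1]  # stand-in scores for what a model would emit

def softmax(xs):
    # Convert raw scores into a probability distribution (shifted by the
    # max for numerical stability).
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
# The model never consults a fact at this point; it only weighs
# probability mass, so any token can be emitted if the dice land on it.
next_token = random.choices(vocab, weights=probs)[0]
```

Nothing in that loop touches a notion of truth, which is the commenter’s point: accuracy only improves indirectly, by shifting which continuations are probable.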

    • redfellow@sopuli.xyz · +4/-2 · edited · 1 month ago

      The truth is, this is the perfect type of comment to make an LLM hallucinate: sounds right, very confident, but completely full of bullshit. You can’t just throw money at every problem and get it solved fast. This is an inherent flaw that can only be solved by something other than an LLM and prompt voodoo.

      They will always spout nonsense. No way around it, for now. A probabilistic neural network has zero, will always have zero, and cannot have anything but zero concept of fact - only a statistically probable result for a given prompt.

      It’s a politician.

    • Canary9341@lemmy.ml · +4/-2 · 1 month ago

      They could also perform some additional iterations with other models on the result to verify it, or even to enrich it; but we come back to the issue of costs.
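That verify-and-retry idea is easy to sketch and expensive to run, which is exactly the cost issue the comment raises. A schematic version follows; both “models” are placeholders (not any real API), and the verifier rule is invented purely so the loop terminates:

```python
# Schematic cross-checking loop: a second model scores the first model's
# draft, retrying until it passes or the budget runs out. The key cost
# property: every retry is another full inference pass.
def generate(prompt, attempt):
    # Placeholder for "model A" producing a draft answer.
    return f"draft {attempt} for: {prompt}"

def verify(prompt, draft):
    # Placeholder for "model B" scoring the draft in [0, 1]. Here we
    # pretend it only trusts drafts that mention the prompt.
    return 1.0 if prompt in draft else 0.0

def answer_with_verification(prompt, min_score=0.8, max_tries=3):
    cost = 0
    for attempt in range(1, max_tries + 1):
        draft = generate(prompt, attempt)
        cost += 2  # one generate call plus one verify call
        if verify(prompt, draft) >= min_score:
            return draft, cost
    return "no verified answer", cost

result, cost = answer_with_verification("why does cheese slide off pizza")
```

Even in the best case the query costs two model calls instead of one, and a strict verifier multiplies that, which is why this mitigation runs straight into the economics the comment points at.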

      • Excrubulent@slrpnk.net · +11/-1 · edited · 1 month ago

        Also, once you start to get AI that reflects on its own information for truthfulness, where does that lead? Ultimately, to determine truth you need to engage with the meaning of the words, and that process inherently involves self-awareness. I would say you’re talking about teaching the AI to understand context, and there is no predefined limit to the layers of context needed to understand the truthfulness of even basic concepts.

        An AI that is aware of its own behaviour and is able to explore context as far as required to answer questions about truth, which would need that exploration precached in some sort of memory to reduce the overhead of doing this from first principles every time? I think you’re talking about a mind; a person.

        I think this might be a fundamental barrier, which I would call the “context barrier”.

        • snooggums@midwest.social · +1 · 1 month ago

          Also once you start to get AI that reflects on its own information for truthfulness, where does that lead?

          A new religion

    • scarabic@lemmy.world · +1 · 1 month ago

      I think you’re right that with sufficient curation and highly structured monitoring and feedback, these problems could be much improved.

      I just think that to prepare an AI, in such a way, to answer any question reliably and usefully would require more human resources than there are elementary particles in the universe. We would be better off connecting live college educated human operators to Google search to individually assist people.

      So I don’t know how helpful it is to say “it’s just expensive” when the entire point of AI is to be lower cost than a battalion of humans.

    • thefactremains@lemmy.world · +2/-3 · 1 month ago

      Why not solve it before training the AI?

      Simply make it clear that this tech is experimental, then provide sources and context with every result. People can make their own assessment.

  • TacticsConsort@yiffit.net · +222/-1 · 1 month ago

    In the interest of transparency, I don’t know if this guy is telling the truth, but it feels very plausible.

  • Hubi@lemmy.world · +146/-2 · 1 month ago

    The solution to the problem is to just pull the plug on the AI search bullshit until it is actually helpful.

  • MNByChoice@midwest.social · +133/-1 · 1 month ago

    Good. Nothing will get us through the hype cycle faster than obvious public failure. Then we can get on with productive uses.

  • Resol van Lemmy@lemmy.world · +96/-2 · 1 month ago

    If you can’t fix it, then get rid of it, and don’t bring it back until we reach a time when it’s good enough to not cause egregious problems (which is never, so basically don’t ever think about using your silly Gemini thing in your products ever again)

    • Xanis@lemmy.world · +13 · edited · 1 month ago

      Corps hate looking bad. Especially to shareholders. The thing is, and perhaps it doesn’t matter, most of us actually respect the step back more than we do the silly business decisions for that quarterly .5% increase in a single dot on a graph. Of course, that respect doesn’t really stop many of us from using services. Hell, I don’t like Amazon but I’ll say this: I still end up there when I need something, even if I try to not end up there in the first place. Though I do try to go to the website of the store instead of using Amazon when I can.

      • Resol van Lemmy@lemmy.world · +3 · 1 month ago

        Sarcasm aside, that 1% can feed a family in a developing country, and they have 100 times that.

        The corporate greed is absolutely insane.

  • masquenox@lemmy.world · +86/-3 · 1 month ago

    Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

    Misinformation is literally the first line of defense for them.

  • Sentient Loom@sh.itjust.works · +79/-1 · 1 month ago

    Here’s a solution: don’t make AI provide the results. Let humans answer each other’s questions like in the good old days.

  • SuddenDownpour@sh.itjust.works · +79/-3 · 1 month ago

    Has No Solution for Its AI Providing Wildly Incorrect Information

    Don’t use it???

    AI has no means to check the heaps of garbage data it has been fed against reality, so even if someone were to somehow code one to be capable of deep, complex epistemological analysis (at which point it would already be something far different from what the media currently calls AI), as long as there’s enough flat-out wrong stuff in its data there’s a growing chance of it screwing up.

  • GenosseFlosse@lemmy.nz · +74 · 1 month ago

    Wow. In the 2000s and 2010s, my impression was that Google was an amazing company where brilliant people worked to solve big problems and make the world a better place. In the last 10 years, all I was hoping for was that they would just stop making their products (search, YouTube) worse.

    Now they’re just blindly riding the AI hype train, because “everyone else is doing AI.”

    • bitwaba@lemmy.world · +15/-1 · edited · 1 month ago

      I’m agreeing with most of what you said, but Google has been working on AI for a long time. Google purchased DeepMind in 2014 and kept it as a separate subsidiary, then started their own AI division inside Google itself in 2017.

      They also developed a machine learning processor called the TPU, which has been used in their data centers since 2015.

      So to Google, AI really means All In. Which is particularly concerning since they don’t even have the best performing AI after a decade of research with a bottomless pit of money.

      • GenosseFlosse@lemmy.nz · +6 · 1 month ago

        Those are some good points; I didn’t really hear about DeepMind for a long time and forgot about it. But replacing Google web search with “AI” really sounds like a decision made by the marketing department, one where they don’t understand their own product, their customers, or the tech’s limitations.

        Unless of course they want to remove/hide all outgoing links from google search, so the user will spend more time there and google has more opportunities to show them ads from their own ad network, instead of losing the visitors to another website…

    • jet@hackertalks.com · +11 · 1 month ago

      Just focusing on one product for this discussion: search

      One of the big problems, is because search chooses winners, it’s naturally a competition. Which means even if Google wanted to stay good, the ecosystem would change and adapt to them. They’re always on a treadmill.

      Google was magical before everyone was competing on the metrics Google used. Once they gamified it, the ecosystem fundamentally changed.

      I’m not apologizing for Google; they’re intrinsically incentivized to behave badly, and the fact that they kill lots of good products is a sign of their myopic culture. I just want to point out that no ecosystem stays static when winners and losers exist.

  • Paradox@lemdro.id · +67 · 1 month ago

    Replace the CEO with an AI. They’re both good at lying and telling people what they want to hear, until they get caught

  • xantoxis@lemmy.world · +65 · 1 month ago

    “It’s broken in horrible, dangerous ways, and we’re gonna keep doing it. Fuck you.”

  • joe_archer@lemmy.world · +66/-2 · 1 month ago

    It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information in its own name, and not give a flying fuck about it.

    • Hackworth@lemmy.world · +21/-1 · edited · 1 month ago

      Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information. Sorting through the links for respectable sources just became second nature, then we learned to scroll past ads to start sorting through links. The real issue with misinformation from an AI is that people treat it like it should be some infallible Oracle - a point of view only half-discouraged by marketing with a few warnings about hallucinations. LLMs are amazing, they’re just not infallible. Just like you’d check a Wikipedia source if it seemed suspect, you shouldn’t trust LLM outputs uncritically. /shrug

      • blind3rdeye@lemm.ee · +15/-1 · edited · 1 month ago

        Google providing links to dubious websites is not the same as google directly providing dubious answers to questions.

        Google is generally considered to be a trusted company. If you do a search for some topic, and google spits out a bunch of links, you can generally trust that those links are going to be somehow related to your search - but the information you find there may or may not be reliable. The information is coming from the external website, which often is some unknown untrusted source - so even though google is trusted, we know that the external information we found might not be. The new situation now is that google is directly providing bad information itself. It isn’t linking us to some unknown untrusted source but rather the supposedly trustworthy google themselves are telling us answers to our questions.

        None of this would be a problem if people just didn’t consider google to be trustworthy in the first place.

        • Hackworth@lemmy.world · +4/-1 · 1 month ago

          I do think Perplexity does a better job. Since it cites sources in its generated response, you can easily check its answer. As for the general public trusting Google, the company’s fall from grace began in 2017, when the EU fined them around 2 billion for fixing search results. There’s been a steady stream of controversies since then, including the revelation that Chrome continues to track you in private mode. YouTube’s predatory practices are relatively well-known. I guess I’m saying that if this is what finally makes people give up on them, no skin off my back. But I’m disappointed by how much their mismanagement seems to be adding to the pile of negativity surrounding AI.

  • namingthingsiseasy@programming.dev · +61 · 1 month ago

    The best part of all of this is that now Pichai is going to really feel the heat of all of his layoffs and other anti-worker policies. Google was once a respected company and a place where people wanted to work. Now they’re just some generic employer with no real lure to bring people in. It worked fine when all he had to do was increase the prices on all their current offerings and stuff in more ads, but when it comes to actual product development, they are so hopelessly adrift that it’s pretty hilarious watching them flail.

    You can really see that consulting background of his doing its work. It’s actually kinda poetic because now he’ll get a chance to see what actually happens to companies that do business with McKinsey.

    • cheesepotatoes@lemmy.world · +4/-8 · edited · 1 month ago

      Let’s be realistic here, google still pays out fat salaries. That would be more than enough incentive for me. I’d take the job and ride the wave until the inevitable lay offs.

      That being said, it seems like it’s only downhill from here (arguably it started a few years ago). Reminds me of IBM at this point.

      • namingthingsiseasy@programming.dev · +9 · 1 month ago

        Your comment explains exactly what happens when post-expiration companies like Google try to innovate:

        Let’s be realistic here, google still pays out fat salaries. That would be more than enough incentive for me. I’d take the job and ride the wave until the inevitable lay offs.

        This is why it takes a lot more than fat salaries to bring a project to life. Google’s culture of innovation has been thoroughly gutted, and if they try to throw money at the problem, they’ll just attract people who are exactly like what you described: money chasers with no real product dreams.

        The people who built Google actually cared about their products. They were real, true technologists who were legitimately trying to actually build something. Over time, the company became infested with incentive chasers, as exhibited by how broken their promotion ladder was for ages, and yet nothing was done about it. And with the terrible years Google has had post-COVID, all the people who really wanted to build a real company are gone. They can throw all the money they want at the problem, but chances are slim that they’ll actually be able to attract, nurture and retain the real talent that’s needed to build something real like this.

        • cheesepotatoes@lemmy.world · +3 · 1 month ago

          Your comment explains exactly what happens when post-expiration companies like Google try to innovate

          Hah well, in my defence I did do the sweaty five guys in a basement startup thing in my 20’s. Nowadays I’m more concerned with paying the mortgage and keeping my newborn alive.

          I agree with everything you’re saying. Google hasn’t been the plucky, disruptive underdog in the arena for at least a decade. They’re corporate and bloated. All that matters to Google, and more broadly any incumbent and large corporation, is to inflate the stock price. The products don’t matter, the tech doesn’t matter, all that matters is make stock # go up. The stock is Google’s product.

          Unfortunately in the current economic environment, capital is more expensive now and the incumbent heavy weights are doing everything they can to build regulatory moats around their cash cows. I don’t think we’ll see any competing startups with real tech and engineering innovations for some time.

      • Semi-Hemi-Lemmygod@lemmy.world · +9/-1 · 1 month ago

        If they backed a dump truck full of money up to my house I’d go work for them just like you. But I’d also be riding it out until the eventual layoff. What neither of us would be doing is putting in a decent amount of effort or building something cool.

        Even if I wanted to work on something cool I know Google would likely release it, not maintain it, and then kill it in a few short years. So even if I was paid a ludicrous salary I wouldn’t do more than was needed, let alone build something that would drive shareholder value.

  • badbytes@lemmy.world · +60/-1 · 1 month ago

    Step 1: Replace CEO with AI.
    Step 2: Ask new AI CEO how to fix it.
    Step 3: Blindly enact and reinforce its steps.