• dude@lemmings.worldOPM · 6 months ago

      The gap between the models you can run locally and the truly large language models is huge, though

      • mindbleach@sh.itjust.works · 6 months ago

        Narrowing every year.

        The high end for video is still going nuts, but the high end for LLMs seems to be petering out.

        • dude@lemmings.worldOPM · 6 months ago

          I would love to run an LLM on my laptop, but I am not aware of any that would run on it and could, let's say, accurately summarize the long news articles I read. The gap is still huge. It's maybe a bit smaller if you have GPUs with a lot of VRAM, or run a data center to host SOTA open-source models like DeepSeek. For what it's worth, a sketch of the wiring is below; the quality question stays open.
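
          A minimal sketch of what "summarize an article with a local model" looks like in practice, assuming Ollama is installed and serving on its default port, with a small model already pulled. The model name (llama3.2), the prompt wording, and the five-sentence target are all placeholder assumptions, not a recommendation:

          ```python
          # Sketch: summarize text via a locally hosted model through Ollama's
          # HTTP API. Assumes `ollama serve` is running on localhost:11434 and
          # that a small model (here "llama3.2", an arbitrary example) exists.
          import requests

          ARTICLE = """(paste the long news article text here)"""

          response = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3.2",  # placeholder: any locally pulled model
                  "prompt": (
                      "Summarize the following article in five sentences:\n\n"
                      + ARTICLE
                  ),
                  "stream": False,  # return one JSON object, not a token stream
              },
              timeout=300,  # small models on laptop CPUs can be slow
          )
          response.raise_for_status()
          print(response.json()["response"])  # the generated summary
          ```

          The mechanics run fine on laptop-class hardware; whether the summary is actually faithful is the open question, since the models small enough to fit in laptop RAM are exactly the ones that drift and hallucinate more, which is the gap being discussed.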