Surprised Pikachu face

  • utopiah@lemmy.world · 5 days ago

    I like Ollama and recommend it for tinkering, but I admit this “LLM Explorer” is quite neat thanks to sections like “LLMs Fit 16GB VRAM”.

    Ollama just works, but it doesn’t help you pick which model best fits your needs.
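
    For anyone curious what “running locally” looks like in practice, here’s a minimal sketch of querying a local Ollama server over its REST API. Assumptions: Ollama is installed and serving on its default port, and the model named below has already been pulled (e.g. `ollama pull llama3` — the model name is just an example, not a recommendation):

        # Minimal sketch: ask a local Ollama server for a completion.
        # Assumes Ollama is running (it listens on localhost:11434 by default)
        # and that the model below has already been pulled.
        import json
        import urllib.request

        payload = json.dumps({
            "model": "llama3",  # example model name; swap in whatever you pulled
            "prompt": "Why run an LLM locally instead of using a website?",
            "stream": False,    # return one JSON object instead of a token stream
        }).encode("utf-8")

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])

    Nothing in that exchange leaves your machine. And as a rough rule of thumb on the VRAM question: a 7B-parameter model at 4-bit quantization wants on the order of 4–5 GB, which is why a filter like “LLMs Fit 16GB VRAM” is so handy.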

    • Knock_Knock_Lemmy_In@lemmy.world · 4 days ago

      “pick which model best fits your needs.”

      Why would I put in the effort to install all this locally? Websites win in terms of convenience.

      • morriscox@lemmy.world · 3 days ago

        I want to work on my stuff in peace and in private, without worrying about a company grabbing it, using it for themselves, and giving/selling it to other outfits, including the government. “If you have nothing to hide…” is bullshit and needs to die.

      • utopiah@lemmy.world · 4 days ago

        I don’t think I understand your point. Are you saying there is no benefit to running locally, and that websites or APIs are more convenient?

        • Knock_Knock_Lemmy_In@lemmy.world · 4 days ago

          I already have Stable Diffusion on a local machine. I was trying to find motivation to install an LLM locally. You answered my question in a different response:

          “…use cases where customization helps while quality doesn’t matter much due to scale, i.e. spam, then LLMs and related tools are amazing.”