Whatever you’re doing, do it locally.
The cloud is someone else’s computer.
The gap between the models you can run locally and the truly large language models is huge, though.
Narrowing every year.
The high end for video is still going nuts, but the high end for LLMs seems to be petering out.
I would love to run an LLM on my laptop, but I'm not aware of any that would run on it and could, say, accurately summarize the long news articles I read. The gap is still huge; maybe a bit smaller if you have GPUs with a lot of VRAM, or run a data center hosting SOTA open-source models like DeepSeek.