Arch, because I use niche software and the AUR doesn’t always get along with Manjaro very well (ungoogled-chromium-bin is the worst offender). Switched to Arch, configured it identically to my Manjaro install, and all has been well.
Firefox (well, librewolf, but forks are a matter of personal preference).
Chrome (Ungoogled Chromium) is used as a fallback for the occasional site that doesn’t work with my restrictive FF configuration.
Both have uBlock, though they’re configured differently to suit their individual purposes.
That looks like one of those multi-color pens. :)
This is literally the first post I saw when opening the app. I guess I’ll do something else.
Quick search to verify…
So this is how I learn. Wouldn’t have it any other way.
berry :)
You need an absolutely insane amount of data to train LLMs. Hundreds of billions to tens of trillions of tokens. (A token isn’t the same as a word, but with numbers this massive it doesn’t even matter for the point.)
Wikipedia just doesn’t have enough data to train an LLM on, and even if you could do it and get okay results, it’ll only know how to write text in the style of Wikipedia. While it might be able to tell you all about how different cultures most commonly cook eggs, I doubt you’ll get any recipe out of it that makes sense.
If you were to take some base model (such as Llama or GPT) and tune it on Wikipedia data, you’ll probably get a “Llama in the style of Wikipedia” result, and that may be what you want, but more likely not.
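For a rough sense of scale, here’s the back-of-envelope math. The figures are assumptions on my part (English Wikipedia is a few billion words, and English subword tokenizers land at a bit more than one token per word), so treat it as a sketch, not a measurement:

```python
# Back-of-envelope sketch: is Wikipedia big enough to pretrain an LLM?
# All figures below are rough assumptions, not measured values.
wikipedia_words = 4.5e9        # English Wikipedia, order of magnitude
tokens_per_word = 1.3          # typical ratio for English subword tokenizers
wikipedia_tokens = wikipedia_words * tokens_per_word   # ~6 billion tokens

typical_pretraining_tokens = 2e12   # modern models train on trillions of tokens

print(f"Wikipedia: ~{wikipedia_tokens:.1e} tokens")
print(f"Typical pretraining run: ~{typical_pretraining_tokens:.1e} tokens")
print(f"Shortfall: ~{typical_pretraining_tokens / wikipedia_tokens:.0f}x")
```

Even being generous with those numbers, you’re short by a couple of orders of magnitude, which is the point.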
Gamer Gun!
You can tell git to use a specific key for each repo. I have the same situation as you and this is how I handle it.
https://superuser.com/questions/232373/how-to-tell-git-which-private-key-to-use
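In short: Git can be told, per repo, which SSH command (and therefore which key) to use via the `core.sshCommand` config option. Here’s a minimal sketch of one way to set that up; the repo and key paths are placeholders, not my real ones:

```python
# Rough sketch: point one repo at a specific SSH key via core.sshCommand.
# Repo and key paths below are placeholders -- adjust to your own setup.
import subprocess
from pathlib import Path

repo = Path("~/code/work-project").expanduser()    # hypothetical repo path
key = Path("~/.ssh/id_ed25519_work").expanduser()  # hypothetical key for this repo

# Equivalent to running inside the repo:
#   git config core.sshCommand "ssh -i <key> -o IdentitiesOnly=yes"
subprocess.run(
    ["git", "-C", str(repo), "config", "core.sshCommand",
     f"ssh -i {key} -o IdentitiesOnly=yes"],
    check=True,
)
```

The `IdentitiesOnly=yes` part keeps ssh from trying every key in your agent before the one you actually want.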
TLDR pls.