After some google-fu, I’m even more puzzled as to how the Finnish man did it.
What I mean is, Linux is widely known and praised for being more efficient and lighter on resources than the greasy, obese NT slog that is Windows 10/11.
To the big-brained ones out there: is this because the Linux kernel is more “stripped down” than the Windows NT kernel? As in, stripping out bits of bloated code that would otherwise drag down speed and operations?
I’m no OS expert or comp sci graduate, but I’m guessing it has a better handle on processes and the CPU tasks it gets given, plus “more refined programming” under the hood?
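Side note: the one semi-concrete thing my google-fu did turn up is that Linux will tell any process how the scheduler has been treating it. Here’s a tiny illustrative C sketch I cobbled together using the POSIX getrusage() call (just something to poke at, not proof of anything about relative speed):

```c
/* Illustrative sketch: ask the kernel how the scheduler has treated us.
 * getrusage() is POSIX; on Linux the two counters below are filled in:
 *   ru_nvcsw  - voluntary context switches (we blocked, e.g. on I/O)
 *   ru_nivcsw - involuntary switches (the scheduler preempted us)
 */
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void) {
    /* Burn some CPU so the scheduler has a reason to preempt us. */
    volatile unsigned long spin = 0;
    for (unsigned long i = 0; i < 500000000UL; i++)
        spin += i;

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0) {
        perror("getrusage");
        return 1;
    }
    printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
    printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    return 0;
}
```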
If I remember rightly, Linux was a server/enterprise OS first, long before anyone took a desktop-focused approach to it, hence it’s used in a lot of institutions and in the education sector thanks to how efficient it is as a server OS.
Hell, despite GNOME and Ubuntu getting flak for being chubby RAM hog bois, they’re still snappier than Windows 11.
macOS? I mean, it’s snappy because it’s a descendant of UNIX, whose design sorta bled into Linux too.
Maybe that’s why? All the snappiness and concepts were lifted straight from the UNIX playbook when designing a kernel and OS that isn’t a fat RAM hog gobbling your system resources the minute you wake it up.
I apologise in advance for any techno gibberish, but I’d really like to understand the “Linux is faster than a speeding bullet” phenomenon.
Cheers!
As I’m sure you’ve gathered, this is a complex and nuanced discussion, but to me the biggest factor making Linux fast is the culture around its development:
There’s something to be said for a culture where a consistent 5% performance improvement in a filesystem or process scheduler is treated as a huge win. How would you even go about finding, or contributing, such a change to something closed-source like Windows 11? Academics write papers about the Linux kernel’s performance and how it can be improved, whereas I tend to think Microsoft takes more of a “good enough” approach to such details.
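To make that concrete, the way such a win usually gets demonstrated is a microbenchmark anyone can run and reproduce. Here’s a rough, hypothetical sketch (not from any real kernel patch, just an illustration) that ping-pongs a byte between two processes over pipes, hammering exactly the context-switch path a scheduler change would move:

```c
/* Hypothetical microbenchmark of the kind you'd use to spot a ~5%
 * scheduler win: two processes ping-pong one byte over a pair of
 * pipes, forcing context switches on every hop. A sketch, not a
 * rigorous benchmark (no CPU pinning, no warmup, no statistics). */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define ROUNDS 100000

int main(void) {
    int ping[2], pong[2];
    if (pipe(ping) != 0 || pipe(pong) != 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    char byte = 'x';
    if (pid == 0) {
        /* Child: echo every byte straight back. */
        for (int i = 0; i < ROUNDS; i++) {
            if (read(ping[0], &byte, 1) != 1) _exit(1);
            if (write(pong[1], &byte, 1) != 1) _exit(1);
        }
        _exit(0);
    }

    /* Parent: time ROUNDS round trips. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        if (write(ping[1], &byte, 1) != 1) return 1;
        if (read(pong[0], &byte, 1) != 1) return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    wait(NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Each round trip costs roughly two context switches. */
    printf("%.0f ns per round trip (~%.0f ns per switch)\n",
           ns / ROUNDS, ns / ROUNDS / 2);
    return 0;
}
```

Run it on an idle machine before and after the change you care about; serious benchmarking would add pinning and repeated runs, but even this toy makes a consistent 5% shift visible, and that visibility is exactly what an open codebase buys you.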