• 0 Posts
  • 29 Comments
Joined 6 months ago
Cake day: September 9th, 2025


  • On the topic of daylight saving time, I used to prefer that we stay on the daylight saving side permanently. But honestly, at this point I am fine with staying on standard time if that means no more switching.

    Otherwise, one thing I wish were done and over with by now is physical junk mail: literal paper showing up in my mailbox that I now have to dispose of, something I never asked for and will never look at. And I can’t help but think this happens to millions of people in my country every single day, all so that a vanishingly small number of people will even glance at it. I can’t imagine how many trees are lost each year for something with zero usefulness.






  • We don’t necessarily need to know how animal brains work to achieve AGI, and it doesn’t necessarily have to work anything like animal brains.

    100% agree. Definitely thinking inside the box, inside the brain, when I went down that path.

    I think a better way to explain my thinking is that LLMs cannot operate like a human brain because they fundamentally lack almost all of its qualities. Like humans, they are good but not perfect at logic, yet they completely lack creativity, intuition, imagination, emotion, and common sense, the qualities that would make AGI.

    Without humans understanding how our brains produce those qualities, achieving AGI will be very hard. But again, it was wrong of me to think we need to translate our brains into code to achieve AGI.




  • From reading all the comments from the community, it’s amazing (yet not surprising) how many managers have fallen for the marketing around these LLMs. LLMs have gotten people at every level of society to accept the marketing without ever evaluating the actual results for their use cases. It’s almost as if the sycophantic nature of LLMs has blinded people to rational judgment, just because the product is shiny and spoke to them in a way no one has in years.

    On a surface level, LLMs are cool, no doubt, and they do have some uses. But beyond that, everyone needs to accept their limitations. LLMs by nature cannot operate the same way a human brain does. AGI is such a long shot because of this, and it’s a scam that LLMs are being marketed as AGI. How can we attempt to recreate the human brain as AGI when we are nowhere close to mapping out how our own brains work in a way we could translate into code, let alone the simpler brains elsewhere in the animal kingdom?






  • As others have said, both washers and dryers let clothes tumble around uncontrollably for hours at a time, which can cause varying levels of damage.

    My best unscientific advice, from doing a lot of both, is to keep load sizes small. Smaller loads reduce friction and, in theory, shorten drying time, both of which reduce damage. Also, if drying T-shirts with ink prints, flip them inside out to protect the ink.