• 10 Posts
  • 392 Comments
Joined 7 months ago
Cake day: August 27th, 2025

  • Yes. It’s gotten worse with age too. Just another wonderful “perk” of aging hardware running ASD.exe

    The big thing that freaked me out / made me worry about early-onset dementia was dropping words when speaking or typing, forgetting names, and reduced motor coordination.

    Having studied martial arts for 30+ years, it was quite the mind trip when I suddenly started getting confused about left vs right … but could still do some incredibly intricate manoeuvres.

    Apparently declining androgens worsen ASD symptoms. I have some research bookmarked that confirms all of these symptoms.

    TL;DR: yeah - me too.



  • Yes. And despite that, one in three Australian homes now has rooftop solar.

    Renewables supplied over half the national grid in Q4 2025, with roughly 7 GW of new capacity added that year alone. Nearly 200,000 home batteries were installed in the second half of 2025.

    One in three new vehicles sold now has some form of electrification, with hybrids leading the shift and petrol sales dropping 10% last year.

    Even heavy industry is moving. Australia already operates the world’s largest fully driverless freight rail network - Rio Tinto’s AutoHaul runs 1,700km of heavy-haul trains across the Pilbara, controlled remotely from Perth, straight from the mine to the deep-water port at Cape Lambert.

    Battery-electric locomotives are now in trial on those same lines. Electrification is happening at every scale here - rooftop, road, and rail - often despite the politics, not because of it.

  • Yep. Last I looked, they used both GAFAM and their own infra (Teclis)? I think the goal is to eventually move solely to their own infra / web indexing.

    Tbh, I dunno how much longer “search” is going to be a unique category. I think we’re probably going to need to move to personal AI fetch tools, as grim as that sounds, that can filter out shit news sources using trusted domains, allow-lists / deny-lists, blockers, etc. Think uBlock, but for search.

    I think that’s how lots of people use ChatGPT tbh; I’m not a fan of that. I’d favour a more local / self hosted AI agent. Something like Perplexica?

    https://github.com/kiranz/perplexica

    Actually, fuck it: maybe I’ll build that myself.

    The surface web is cooked / enshittified almost beyond use and we might need to fight fire with fire.


  • SuspciousCarrot78@lemmy.world to DeGoogle Yourself@lemmy.ml · “Aged like milk” · edited 5 days ago
    I hear you - a one-off purchase (or $50-100 for 5 years) would be a selling point. Hell, I’d even buy credits like I do for USENET.

    https://stephango.com/quality-software

    Just let me pay in one lump sum. Not a fan of rolling subscriptions; I never end up using the whole quota, so unless there’s a rollover it gets wasted.

    Self-hosted SearXNG is an option, but it’s going to be pulling from Bing, Google, etc., so the result quality ceiling is capped by those engines. Kagi is trying to be a better search engine overall and not just a private wrapper around existing ones, IIUC.

    Personally, I find myself not really searching much any more. I sort of know which sites I need and go there directly. Anything low value goes through ddg-lite or (gasp) my LLM.

    EDIT: Huh…my LLM just told me to sit down

    “Kagi supports PayPal and OpenNode (Bitcoin) as alternative payment methods. Crucially, these do not create a subscription. They top off your account with credits, which then fund your Kagi membership. That’s essentially the lump-sum/credits model you described”

    Well then…I sit corrected.



  • FWIW Extra shit I cooked last night. It’s live now, so deserves a PS: of its own

    PPS: I built in a spam blocker as well.

    • allow-list / deny-list domain filters
    • DDG-lite junk-domain blocklist
    • ad/tracker URL rejection
    • relevance gate before any provenance upgrade
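    Roughly how those filters could fit together - a minimal sketch, assuming a single `keep()` gate over result URLs. The list contents and names here are illustrative, not the shipped config:

```python
# Hypothetical sketch of the result filters above: allow/deny domain
# lists, a junk-domain blocklist, and ad/tracker URL rejection.
from urllib.parse import urlparse, parse_qs

ALLOW: set[str] = set()                       # empty = allow-list mode off
DENY = {"content-farm.example"}               # illustrative deny-list entry
JUNK = {"pinterest.com"}                      # DDG-lite style junk domains
TRACKER_PARAMS = {"utm_source", "gclid", "fbclid"}

def host_of(url: str) -> str:
    return (urlparse(url).hostname or "").removeprefix("www.")

def keep(url: str) -> bool:
    host = host_of(url)
    if ALLOW and host not in ALLOW:
        return False                          # allow-list mode: only listed hosts pass
    if host in DENY or host in JUNK:
        return False                          # deny-list / junk-domain block
    if TRACKER_PARAMS & set(parse_qs(urlparse(url).query)):
        return False                          # obvious ad/tracker URL: reject
    return True
```

    Running every result through one boolean gate before scoring keeps the blocklists cheap: a rejected URL never reaches the relevance stage at all.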

    Enjoy :) Blurb below

    “But what if it just… Googled it?”

    We can do that. But better.

    You: Who won best picture at the 97th Academy Awards?
    
    Model: Anora won best picture at the 97th Academy Awards.
    See: https://www.wdsu.com/article/2025-oscars-biggest-moments/64003102
    Confidence: medium | Source: Web
    

    Without >>web, that same 4B model said “The Fabelmans.” Then when I pushed it, “Cannes Film Festival.” With web retrieval, the router searches the internet, scores every result deterministically (phrase match + token overlap + domain trust), and only accepts evidence that passes a hard threshold. Garbage results get rejected, not served. The model never touches the answer - it’s extracted straight from the evidence.
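    The deterministic scoring described above (phrase match + token overlap + domain trust, with a hard acceptance threshold) could look roughly like this. The weights, trusted-domain table, and function names are my own illustrative guesses, not the actual implementation:

```python
# Sketch of deterministic evidence scoring: no model involved, just
# string matching plus a domain-trust bonus, gated by a threshold.
from urllib.parse import urlparse

TRUSTED = {"wikipedia.org": 0.3, "research.google": 0.3}  # illustrative
THRESHOLD = 0.5                                           # hard gate

def score(query: str, result_title: str, url: str) -> float:
    q = set(query.lower().split())
    t = set(result_title.lower().split())
    overlap = len(q & t) / max(len(q), 1)                  # token overlap
    phrase = 0.4 if query.lower() in result_title.lower() else 0.0  # exact phrase
    host = urlparse(url).hostname or ""
    trust = max((w for d, w in TRUSTED.items()
                 if host == d or host.endswith("." + d)), default=0.0)
    return overlap + phrase + trust

def accept(query: str, title: str, url: str) -> bool:
    # Evidence below the threshold is rejected outright, never served.
    return score(query, title, url) >= THRESHOLD
```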

    Retrieval cascade: Cheatsheets → Wiki → Web → Model. Each step fires only if the previous one missed. The model is last resort, not first call. Sound familiar?
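    The cascade above reduces to a few lines: each tier fires only on a miss from the previous one, and the model is the final fallback. The tier functions here are hypothetical stand-ins:

```python
# Minimal sketch of the retrieval cascade: first tier that answers wins;
# the model is last resort, not first call.
def cascade(query, tiers, model_fallback):
    for name, lookup in tiers:
        hit = lookup(query)
        if hit is not None:
            return name, hit               # earlier tier answered: stop here
    return "Model", model_fallback(query)  # everything missed: fall back

answer = cascade(
    "capital of France",
    tiers=[
        ("Cheatsheets", lambda q: None),   # miss
        ("Wiki", lambda q: "Paris" if "France" in q else None),
        ("Web", lambda q: None),           # never reached
    ],
    model_fallback=lambda q: "model guess",
)
# answer == ("Wiki", "Paris")
```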

    I asked it who wrote a paper and it invented “Dr. David J. G. Smith” - a person who does not exist. After wiring DOI/Crossref fallback for academic metadata:

    You: Who wrote The Anatomy of a Large-Scale Hypertextual Web Search Engine?
    
    Model: Sergey Brin and Lawrence Page.
    See: https://research.google/pubs/the-anatomy-of-a-large-scale-hypertextual-web-search-engine/
    Confidence: medium | Source: Web
    

    Deterministic extraction from metadata. No model synthesis.

    >>web is provider-agnostic - it ships with DuckDuckGo (no API key, no account) and supports Tavily, SearxNG, or your own adapter. Add your own trusted domains in one config line (there are a bunch baked in already, like PubMed). Every answer comes with a See: URL so you can verify with one click. Receipts, not pinky promises. PS: I even cooked in allow-list / deny-list domain filters, a junk-domain blocklist and ad/tracker URL rejection so your results don’t get fouled with low-quality spam shit.
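    Provider-agnostic here just means the router codes against an interface, not a backend. A minimal sketch of that shape, assuming adapters return (title, url) pairs - the class and method names are illustrative, and the adapter below is a stub, not a real DDG/Tavily/SearxNG client:

```python
# Illustrative provider-agnostic search interface: any backend that maps
# a query to (title, url) pairs can slot in behind the same protocol.
from typing import Protocol

class SearchProvider(Protocol):
    def search(self, query: str, limit: int = 5) -> list[tuple[str, str]]: ...

class StaticProvider:
    """Stand-in adapter; a real one would wrap DDG, Tavily, or a SearxNG instance."""
    def __init__(self, results: list[tuple[str, str]]):
        self._results = results

    def search(self, query: str, limit: int = 5) -> list[tuple[str, str]]:
        return self._results[:limit]

def run(provider: SearchProvider, query: str) -> list[tuple[str, str]]:
    # The router depends only on the protocol, never on a concrete backend.
    return provider.search(query)
```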

