- cross-posted to:
  - technik@feddit.org
I was super annoyed when they first took away the links. “Pages are more dependably available now” is such a lazy excuse. Storing the cached content probably wasn’t even that expensive for them, since it didn’t retain anything beyond basic HTML and text. Their shitty AI-centric web search was likely the main reason for getting rid of it.
Throw it on the pile. https://killedbygoogle.com
PR open since February: https://github.com/codyogden/killedbygoogle/pull/1481
Google sure does love killing things people love.
“Introducing Google Pets”
Noooooooo!!!
A partnership with Delta.
I definitely miss the cached pages. I found that I was using the feature very frequently. Maybe it’s just the relative obscurity of some of my hobbies and interests, but a lot of the information that shows up in search engines seems to come from old forums. Oftentimes those old forums are no longer around, or have migrated to new software (obliterating the old URLs and old posts as well).
If you’re looking for a replacement, there are a lot of similar apps out there that you can host yourself (and therefore can’t be killed), or pay a fee to have hosted for you.
https://linkwarden.app/ is the one I use.
There’s also:
- https://wallabag.org/ (popular)
- https://linkding.link/ (very basic)
- https://archivebox.io/
That’s not the same at all. ArchiveBox would do the trick if it were pre-populated with every page Google Search has in its index.
Well, it is the same idea; it just doesn’t go out and do all the caching for you ahead of time. Instead it’s on demand. You’re right that as far as pre-populated alternatives go, it’s just archive.org now.
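For anyone wondering what “on demand” looks like in practice, here’s a minimal, purely illustrative Python sketch of the idea (this is not how linkwarden or ArchiveBox are actually implemented; the `archive/` directory and user-agent string are made up for the example). It fetches a page the moment you save the link and writes a timestamped HTML copy to disk:

```python
import time
import urllib.request
from pathlib import Path

# Hypothetical local snapshot directory -- not part of any of the tools above.
ARCHIVE_DIR = Path("archive")

def snapshot(url: str) -> Path:
    """Fetch a page right now and store the raw HTML with a timestamp."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    req = urllib.request.Request(url, headers={"User-Agent": "personal-archiver/0.1"})
    with urllib.request.urlopen(req, timeout=15) as resp:
        body = resp.read()
    stamp = time.strftime("%Y%m%d%H%M%S")
    safe_name = url.replace("://", "_").replace("/", "_")
    out = ARCHIVE_DIR / f"{safe_name}.{stamp}.html"
    out.write_bytes(body)
    return out

if __name__ == "__main__":
    print(snapshot("https://example.com"))
```

Real tools layer full-page rendering, deduplication, and search on top of this, but the basic save-it-now loop is roughly that simple.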
…and everybody was shocked! Absolutely shocked.
Shocked? You’d think all the people outraged at having their websites scraped would be delighted. That’s probably the real reason for this.
It’s not the scraping itself, but the purpose of the scraping, that can be problematic. There are good reasons for public sites to allow scraping.
I have the distinct impression that a number of people would object to the purpose of re-hosting their content as part of a commercial service, especially one run by Google.
Anyway, now no one has to worry about Google helping people bypass their robots.txt or IP-blocks or whatever counter-measures they take. And Google doesn’t have to worry about being sued. Next stop: The Wayback Machine.
At least they are using the Internet Archive, which is neat.
Google’s money is a bit scummy these days, and definitely not something that should be relied upon long term, but I hope Google are making some kind of monetary donation.
It’s unclear if Google is donating anything (it would honestly surprise me if they didn’t), but at least archive.org is happy about this feature and calls it a collaboration: https://blog.archive.org/2024/09/11/new-feature-alert-access-archived-webpages-directly-through-google-search/
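If you want to do the same lookup yourself without going through Google at all, the Internet Archive exposes a public availability endpoint (https://archive.org/wayback/available) that returns the closest archived snapshot for a given URL. A small Python sketch (the function name is mine; treat it as a best-effort example of that documented endpoint, which can rate-limit or change):

```python
import json
import urllib.parse
import urllib.request

def latest_snapshot(page_url: str):
    """Query the Wayback Machine availability API and return the URL of the
    closest archived copy of page_url, or None if nothing is archived."""
    api = "https://archive.org/wayback/available?" + urllib.parse.urlencode({"url": page_url})
    with urllib.request.urlopen(api, timeout=10) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest["url"]
    return None

if __name__ == "__main__":
    print(latest_snapshot("https://example.com"))
```

Running it prints a web.archive.org snapshot URL if the page has ever been captured, or None otherwise.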
Google is just gonna slowly fade away like some bad early sci-fi teleporter schtick.
It’s a 2-trillion-dollar company; I think news of its coming demise has been exaggerated.
What a disgrace. This clown show of a company kills things people love and pushes advertising no one wants.
Another piece of internet history now gone. Perhaps not deleted, but hidden away in Google’s own archives until they degrade away.
Google cache?
They used to have a “cache” link on search results. It occasionally came in handy when the original site was down or changed their link or something.
It was a tool to see what Google had cached, and to check web pages for changes based on Google’s last access.
It also had a nice habit of bypassing those pop-ups that would prevent scrolling. 😂