https://webcache.googleusercontent.com/search?q=cache%3Ahttp...
"It's a good thing copyright law exists to prevent AI from doing anything useful because I'm fearful of AI."
"Hey, does anyone know why the web has become so boring and useless???"
Who knows – maybe Google Search won't even show page descriptions in the coming years.
A paradox of politics is that the desire to not talk about politics favors the status quo, necessitating the need to talk about politics.
It's obvious to me that AI is being rolled out in just exactly the wrong way at multiple levels, creating misaligned incentives and a perversion of original goals. But the knee-jerk reactions to it that drive the creation of draconian policies were predicted almost a century ago.
It seems that everything we hold sacred is under attack in these times. The smart bet seems to be on ensh@ttification, because the organizations that were formerly stewards of online freedom have abdicated their responsibility.
I find it helpful to remember that every misstep by established players creates an opportunity for newcomers to compete.
Please take me back to the Google of 10 years ago, when they actually had working products instead of experimenting with AI.
I guess this feature was too useful for their users.
It was so useful for when a client deleted or changed a page in their CMS and had no recovery position or backup!
And by Client I mean Me most of the time LOL
Cache is needed for machine translation of PDFs. This change has made it difficult to read PDFs written in other languages.
You can try it out: add a new bookmark and paste this as its URL:
javascript:var url = new URL(location.href); location.href = 'https://webcache.googleusercontent.com/search?q=cache:' + url;
I wrote a browser extension back in 2005 or so called Commoncache to help the user view a page when Slashdot hugged a site to death. It used a fallback mechanism: it would try Google cache, then the Wayback Machine, and finally CoralCache.
It was modestly popular and was even included on a CD in an issue of Macworld magazine.
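The fallback chain described above can be sketched in a few lines. The mirror URL patterns below are my own illustrative assumptions, not the original extension's code:

```javascript
// Sketch of a Commoncache-style fallback chain: build candidate mirror
// URLs for a dead page, then (in a real extension) fetch each in order
// and stop at the first one that responds. URL patterns are assumptions.
const mirrors = [
  (u) => `https://webcache.googleusercontent.com/search?q=cache:${u}`,
  (u) => `https://web.archive.org/web/2/${u}`,
  (u) => {
    const p = new URL(u); // CoralCDN-style: append .nyud.net to the host
    return `http://${p.host}.nyud.net${p.pathname}`;
  },
];

function candidateUrls(pageUrl) {
  return mirrors.map((make) => make(pageUrl));
}

// A real extension would then do, roughly:
// for (const c of candidateUrls(deadUrl)) {
//   const res = await fetch(c);
//   if (res.ok) { location.href = c; break; }
// }
```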
I have absolutely no idea what's happening with Google's project managers these days, or whoever in the company is making product decisions. Thousands of highly intelligent and highly paid staff just keep making their core product and associated features increasingly user hostile.
YouTube search is an absolute travesty. Google search just floods the first page with various cards that take up previously useful space and add no value to the simple need of finding answers to questions.
I firmly believe that relying solely on A/B testing for feature launches reached peak usefulness years ago. It's as if everyone forgot to check whether new features actually benefit users at a human level, simply because 51% of people click more on B, even when A is the better experience for everyone.
It has been a slightly useful thing, especially with older links, for seeing even a barebones, text-only version of what might have been on a site without having to go to the IA Wayback Machine, etc.
It's odd because they still have the mortgage calculator. I can only imagine it was some weird agreement between Google and car companies to remove it.
Could you tell your decision makers to stop making such stupid, user-hostile decisions?
Ruth Porat joining was the day that old Google died.
Instead of "AI" (be real: LLMs and Stable Diffusion) being developed openly and studied without a profit motive, it's been thrust into the cheap world of VCs looking to extract any profit whatsoever from it, nearly immediately.
This has caused these "AI" tools to be used to steal and rehash artists' work (cheapening artists, because tech bros and MBAs resent art and the time it takes to perfect) and to cheapen human labor (why pay a person to understand anything when you can have an LLM do 20% of the job, and worse?).
These AI tools were thrown in the deep end with a singular purpose: cheapen what was thought to be protected from computers, while not providing any real value to the layperson.
The average person can maybe get a funny joke, a few bad lines of code, or an ugly bespoke AI image for their Medium article, but the true winners are the ones cutting jobs en masse before the tech has even matured, so both the employee and the customer get a worse product while the MBAs show a solid quarterly report after running a knife across their workforce's neck.
Those with power and money have continued to show they will not use technology for any positive societal purpose until they are forced to with regulation. So we're forced to neuter the technology before it can really develop. It's like one child playing violently with a toy, forcing the teacher to take the toy away from everyone else.
First: "We should move the cache link one menu click deeper, we don't have room here"
(No one can easily find it now)
Later: "Wow, no one uses cache, guess we should remove the link!"
Probably just a ham-fisted change-for-the-sake-of-change thing. "Reimagining" search often gets boiled down to the least common agreeable set of things across 3,000 people. This is "simplification".
See https://klse.i3investor.com/web/blog/detail/future_tech/2024...
and
https://www.cnbc.com/2024/01/30/alphabet-googl-q4-earnings-r...
(The Internet Archive can't be counted on to remain unchanged either, but at least the Internet Archive will either stay as it is or disappear; it won't silently change. I wish archive.org didn't auto-nuke entire sites based on robots.txt, because when domains disappear, the squatters who take them over often seem to deploy restrictive robots.txt files for some reason.)
The cache links had a near-zero click rate, and that ruined the engagement metrics of some project manager?
Yeah, it's worth a few bucks.
They killed reverse image search for Lens, which is borderline useless.
Image search no longer has its date range filter. You can still use the undocumented keyword, but who knows when they'll take that away.
Search results are increasingly irrelevant. Yesterday, I was searching for news articles about the late 00s capture of an Al Qaeda leader who was tracked down in an unusual way. Amongst the results were Visa's careers page. (As a side note, I then asked ChatGPT about the event and it hallucinated in all 3 of my attempts.)
Edit: also, easy access to non-paywalled content gives you a massive trove of training data for machine learning models. Even if these aren't the main reasons for this feature disappearing, they're pretty convenient side effects.
https://www.poynter.org/reporting-editing/2014/today-in-medi... https://www.youtube.com/watch?v=AIhy0T7Q48Y
We can see the same thing with other, newer media, e.g. video games. I don't have solutions per se, though I wish this pattern could be different. The profit motive and appealing to base instinct seem to always "win".
I’m not sure why people seem to believe AI is different or special, or what leads them to believe you can stop it any more than the automated loom or the combine harvester.
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.'”
― Douglas Adams, The Hitchhiker's Guide to the Galaxy
Give them a few years at this rate, and they'll move to financialization.
A decade after that, they'll divest most of their assets and switch to providing services, like IBM, except they'll probably try doing it without adding a service team, which should make for some fun satire from The Register.
If you need cached versions of websites, just use the Internet Archive, and make sure to donate.
You mean Firefox was.
Chrome: 1) right-click an image; 2) you can't save it, because Google decided so, just like Apple's mentality.
Imagine actually using a browser whose maker decides you should not have access to basic features.
<https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea...>
<https://blog.archive.org/2016/12/17/robots-txt-gov-mil-websi...>
But you probably can somehow move the link to a more prominent position as Kagi does support custom css that is served with your search results. [2]
[1] https://help.kagi.com/kagi/features/website-info-personalize...
[2] https://help.kagi.com/kagi/features/custom-css.html#customiz...
[1] https://help.kagi.com/kagi/search-details/search-sources.htm...
Can't imagine that takes much maintenance.
I've archived hundreds to low thousands of my own personal contributions from a few online services to both sites.
For Archive Today, it is possible to expedite the archival process by generating the initial submission URL, though you'll have to complete another two steps after that point manually as I recall. If you're archiving a large set of sites, you can compile a list, generate a Web page off of that, and work through it at a pretty good clip.
________________________________
Notes:
1. The URL format being
https://web.archive.org/save/<URL-to-save>
You can submit that via a script using any HTTP request generator, e.g., curl, wget, w3m, lynx, etc.

I remember being in an interview with a Googler where they posed some contrived problem which, in retrospect, I realized they intended me to solve using URL rewrites (so all result clicks run through Google rather than going direct to the desired site). This was before that was the norm. It was appalling: there's no way I would have entertained such an approach, given the way it breaks the user's expectations about how links work (not to mention degrading their privacy).
Today I can't copy a 'bare' news site link without extra steps or properly rely on the back button, and I wish I could find that guy and slap them across the head for making my Internet a shittier place. </rant>
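The web.archive.org/save URL mentioned in the notes above is easy to drive from a script. A minimal sketch (the helper name is mine, and the live request is left commented out):

```javascript
// Build the Wayback Machine "save" URL described in the note above.
// saveUrl is a hypothetical helper name, not an official API.
function saveUrl(pageUrl) {
  return `https://web.archive.org/save/${pageUrl}`;
}

// Submitting is then a single request per page (Node 18+ global fetch),
// commented out here so the sketch doesn't fire live traffic:
// const res = await fetch(saveUrl("https://example.com/post"));
// console.log(res.status);
```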
"AI is doing a thing which we have already been doing for most of the internet's existence; this thing is central to the internet as we know it works today."
Versus
"AI is doing a thing that we stopped doing proactively because we thought it might be illegal, which also means it's clearly not that important to the web, please let us keep doing it for AI training."
The target of my comment is specifically the "well profit=good" crowd on here.
It's to point out the innate contradiction in how we speak about technology compared to the guaranteed outcome in our system.
We're doomed as a species if we keep believing in the magical market as a primary mover for said species. We'll stall in a circlejerk of ads and stock buybacks and never accomplish anything, because any good use for technology is locked behind a gate due to the lack of profit.
Any improvement in medicine is behind lock and key because the pharmaceutical companies "need to make back their investment" despite massive gov funding.
We're killing ourselves here with the spectacle that this system is either working or worth saving.
This can be translated to modern day.
They say that what makes them usable, and their results most topical, is their own index.
They also list Google as an ancillary, as well as many others.
But the "feel" comes more from the UI and the relevance of the results, and Google now completely fails on both.
I didn't realize it was a rebalancing. But seriously, I have no regrets. If nothing else, the incentive alignment matches my values better.
Interesting, well we know how that ended with google AMP. It's good that we have people that think like you. Sadly there's always someone else willing to just take the money and implement it. I'm grateful for the community and the hobbyists that build workarounds and alternatives (e.g. searxng), and I contribute where I can. I think that's the only real solution at the moment.
MaidSAFE
Freenet
Hypercore
Yes, I know. Some companies have abused the social contracts of the web that have been in place since its inception, telling us they are doing something so amazing that they need no permission to do what they're doing.
So the web has responded the way it should.
However, Firefox was smart enough to not be tricked by that.
That's the right move if you side with the consumer and the wrong if you side with the producer. Google is fundamentally highly elitist and that's a pretty easy framework to use if you want to guess their actions.
If it was not for Kagi putting some sanity back into search, Google and Reddit going to shit would have had me seriously reconsider this career and spending so much time online.
I think it is abhorrent, borderline criminal, to place yourself at the centre of the Internet experience and then one day decide to make it shittier for everybody because of short-sighted lust for money and inept PMs.
Or, with DDG, append !r
I long for the internet of old. Everything is enshittified.
What I'm worried about is a new set of legal precedents that will make it impossible for Archive to legally rehost the content they have scraped.
Hypercore apparently got acquired, and they are now a company selling solutions to businesses.
Freenet 2023 is a FOSS project. I've been watching the Matrix server for a while; Ian says they're launching the network in 2 weeks. It is a decentralized data store plus runtime. So while the original Freenet was analogous to a disk, Freenet 2023 is analogous to an entire computer. See https://freenet.org/
But how will they know what you are shopping for?
Our current military-industrial project is just a self-fulfilling fantasy. Stop the contracts for war; create contracts and research for moving humanity forward without the need to strap it to a missile 20 years before it hits shelves. Keep IP rights publicly owned, license them to anyone, and use the licensing fees to fund new research.
I don't think that Reddit was ever good. (The upvote/downvote system stifles real discussion, and even normal people treat it like a game -- exaggerating and making up stories for upvotes.) But today it's unambiguously terrible.
Putting into words the thought I’ve been having recently.
I think Google is winding down on search. I wonder how important it is to them these days. There’s no way being this crap is not intentional.
Similarly, !r uses reddit's search
You should read their primer: https://primer.safenetwork.org/
It's completely uncensorable and unstoppable.
They encrypt every chunk of data, using "self-encryption." They don't require a manual market for hosting (like IPFS) so people can't be intimidated into not hosting something.
They even have their own implementation of the DHT which removes IP addresses after the first hop, so you can't discover the whole network and DDOS it / block it (which is not true of HyperCore, IPFS, etc.)
Ever notice people posting completely broken markdown? That's because they use new reddit, which disagrees about markdown newlines. I would guess >70% of programmers/tech people are using new reddit.
>> You're likely to want to store data on the Network. Why? Because in return for a very small one-time payment, your data will then be stored forever, encrypted and accessible anywhere in the world and only by you—unless you choose to share it.
No different than IPFS.
>> Safe Network Tokens are the incentive mechanism that encourages individuals to provide the computing resources that the Network requires: storage, broadband, and CPU resources. ... Individuals who choose to supply the resources that the Network requires have the opportunity to be rewarded with Safe Network Tokens. This work ensures that the Network rewards those who provide it with valuable resources.
> They encrypt every chunk of data, using "self-encryption."
>> Next, let's talk encryption. Imagine you want to store a photo. That data is protected by a number of layers of encryption. Your photo starts by being broken into pieces which are then encrypted with the other parts of that same file. This 'Self-Encryption' happens before the data ever hits the Network. So, unless you choose to override it, none of your data touches the Network unless it is encrypted. And it’s designed so that you're the only one that ever holds the key.
If you hold the key, why bother encrypting the data with itself? TBH the entire thing reads like a new crypto.
> They even have their own implementation of the DHT which removes IP addresses after the first hop, so you can't discover the whole network and DDOS it / block it (which is not true of HyperCore, IPFS, etc.)
This also makes large parts of the network unreachable. Freenet achieves DDOS protection without this extreme measure[2]. It also allows development of all sorts of apps, it's not only for data storage.
1: https://safenetwork.tech/how-it-works/#where-is-data-stored
I haven't learned to find product reviews yet, but I imagine that a trick would be to know which forums to go for certain kinds of products, e.g. maybe https://xdaforums.com/ for Android devices.
There is also the search engine https://boardreader.com/ which ONLY searches proper forums--sometimes you find neat threads, sometimes not.
You’re wrong. SAFE is autonomous, meaning no one has to agree to host your thing, and no one has to agree to pay you for hosting. You just spin up a node and earn coin. With IPFS there are manual deals, it’s not clear how to actually get paid for your file space, and they even opened up a new tier that pays 10x Filecoin block rewards for hosting data they consider “important”, yet the hosts for that data don’t charge any fees, and despite being quite tech-savvy I couldn’t figure out how to start earning Filecoin for hosting using their program at all! Do I just pin the IDs of data which might earn me 10x block rewards, or do I need to advertise and get someone to whitelist me for a “deal”? (Did anyone here do it successfully?)
> This also makes large parts of the network unreachable. Freenet achieves DDOS protection without this extreme measure[2]. It also allows development of all sorts of apps, it's not only for data storage.
Not that kind of DDOS. If I know the IP addresses of all the nodes on the network, I can flood them with traffic via the DHT leaking their IP.
Freenet does antiflood tokens on the level of their protocol, but I can DDOS their ports and subnets without conforming to the protocol. SAFE doesn’t let you even send a message to the end-computer’s IP because there is NO ADDRESS on any protocol (including BGP) that would route it there. That makes the network harder to block.
Also, I fail to see how this makes parts of the network unreachable. It’s just that I need the neighbors of a node to agree to forward info to that node. Unless you mean eclipse attacks?
> If you hold the key, why bother?
That’s a very good question… this is the only patent their team ever made… the data is encrypted using its own contents instead of relying on an external key, to aid in things like deduplication as multiple people store and encrypt the same data, while each has their own key to access it: https://youtu.be/Jnvwv4z17b4?si=oHM96fBjCVGaGmSN
It’s not a “new crypto”. That video is 9 years old, and the SAFE network started before Bitcoin, just a bit after BitTorrent. They started in 2006 and are still working on it.
Self-encryption patent? Or the functioning of the network in general?
I'm new here, so I'm unsure of the policy on links, but please search for safenetform. Folks there will be delighted to answer your queries. This project is very different from most, and it can take a little effort to grasp it. That effort is very much worth it.
This boardreader search engine sounds promising though! I'll give that a try next time I'm shopping in an unfamiliar market.
This claim is what needs something to back it up. Otherwise it's hard to believe.