- cross-posted to:
- techtakes
cross-posted from: https://awful.systems/post/2178630
deleted by creator
Google results are actually already pretty terrible. They just have tremendous inertia.
I stopped using them months ago. I only notice when I’m looking for places (e.g., restaurants, barbers).
I’m not unhappy but may still shop around.
yeah, I appreciate the push towards more privacy-centric search engines, but as a result, geographically relevant searches on places like Startpage are next to useless for me. I understand why, but I wish local results were a bit better on the alternatives.
We all keep saying this but can anybody point me to which one is better?
I invariably end up having to go back to them because the other search engines all have their own problems.
The issue is the internet is polluted with SEO and all the useful things that used to be spread out are now condensed onto places like Reddit, or places that aren’t even being indexed.
Supposedly there’s a paid one that is good. I haven’t tried. The thing is Google is completely enshittified. They don’t have to care about you or the sites you search. So my theory is Bing is better because they are hungrier and anything that takes away market share from Google is good—but I’m fully aware that Microsoft was just as shitty as Google and will be again if they get back on top.
Everything else I know of is either just an alternate front end for one of them or an aggregator of both. So you’re right, there’s precious little alternative to Google. But it’s almost bad enough I’m ready for the return of web rings of good sites vouching for each other.
I assume you’re talking about kagi. I pay for their $5/month subscription and it’s great
Serious question. Can you spell out for me the exact advantages you feel they provide? I have a free account, but every time I try them out I feel like their answers are honestly a little bit worse than Ecosia.
I find Kagi results a little bit better than Google’s (for most things). I like that certain categories of results are put in their own sections (listicles, forums) so they’re easy to ignore if you want. I like that I can prioritize, deprioritize, block, or pin results from certain domains. I like that I can quickly switch “lenses” to one of the predefined or custom lenses.
Ok I don’t think I’m doing it right then. Is there a tutorial?
my theory is Bing is better because they are hungrier and anything that takes away market share from Google is good
If you think Microsoft is in the business of innovation and healthy competition, you’re wrong.
deleted by creator
A lot can change in 13 years, but a company that starts off morally evil does not magically get better as time goes on. If anything, they’re worse - we just don’t have the luxury of knowing exactly how yet.
deleted by creator
If that was what you took from my post, give it another read. I’m not pro MS. I’m pro not feeding Google. And Bing is fine.
I’m not pro MS. I’m pro not feeding Google.
I feel like this is similar to arguing that Exxon is bad so it’s better to buy gas from BP.
Both are shitty options.
Thankfully there are other options, because you just nailed the two places I refuse to ever get gas from when there is any other option. If there was a good third option I’d take it here, but while Google commands so much market share, a new competitor would probably only siphon users from Bing (and that’s not enough users), so I don’t think a real alternative will come. I’m intrigued by Kagi, though.
Part 5 is where I don’t see this actually going.
Look at twitter. Now look at mastodon. Tell me which one is more shitty. Now tell me which one has something like 85% of the market, and which one most people haven’t heard of.
Just because something is better doesn’t mean people use it. You can fit all of Lemmy in the world in one of the larger NBA-size arenas. You can’t even fit Twitter’s total user base into some smaller CITIES.
I think the amount of people who are familiar with search engine options besides Google is quite a bit larger than the population of Lemmy. (It fuckin better be, anyway)
I’m pretty pessimistic about this. Say no, and either:

- Google still scrapes your site to train their AI
- People don’t care that it’s wrong and keep using Google instead of other search engines

Or:

- You don’t show up in Google search results
- You still show up in other search results
- Google is no longer bringing the best results
- People keep using Google anyway, so they stop finding your site

Either way, you lose.
Unfortunately, the vast majority of people do not give a single fuck and they will use whatever is preinstalled on their device
Let’s DuckDuckGOOOOOOO!
For now.
DDG gets search results from Bing, owned by Microsoft. And I wouldn’t be surprised if the latter did the same as Google did.
That’s technically true, but it’s as misleading as saying they get their search results from Yandex. Their results are aggregated from several search engines, not just Bing. They also have their own web crawler, DuckDuckBot, which absolutely respects RobotRules.
Edit: I’m told my information is out of date. No more Yandex because of Uncle Sam. Yahoo is just Bing now, so that index doesn’t count anymore. The bulk of the rest of their sources are largely inconsequential specialized search engines. Their sources page states that they “largely source from Bing”.
Fair point.
Where is your evidence for that? It used to be Bing and Yandex, but now it’s just Bing. They use other non search engine APIs and do a small amount of crawling AFAIK. Details of who uses what here: https://seirdy.one/posts/2021/03/10/search-engines-with-own-indexes/
I read it on their sources page at some point, but it looks like that page has changed since last I looked.
There’s an article in this spring’s 2600 magazine that claims it’s all Bing results. I haven’t dug through their Python code and I’m definitely no expert anyway, but I’d also prefer not to repost an article from a small independent magazine like that, even if it would let the people on Lemmy who know more than me take a look.
DuckDuckGo is just Bing. Which is uh… going from Google to Microsoft. Maybe not much better either
We’re at a point where not only should the Internet be classified as a utility, so should Search.
Yeah, it’s not just the water itself that is the utility; the pipes and pumping stations are part of it. Otherwise you have water…uh…somewhere, go get it yourself.
Can someone explain why the fuck Google is pushing this so hard? Generative AI is not a general intelligence, and useless for concrete facts. Google has already demonstrated how shitty it is for information, and the people with the knowledge to work on the project have to know this.
So why the fuck are they all full steam ahead on something that will always be useless for them?
Because the engineers aren’t in charge anymore
AI is hype.
They’ve recently signed a deal with Reddit for AI-parsable data. Reddit reciprocated by making Google the only search engine allowed to index the site.
Google now thinks it can do the same to literally everyone else.
Googling is pretty damn mainstream.
Don’t give Google your data, then don’t be included in Google’s search results. It’s like a flip of their previous trade with Reddit, except it’s not a trade. It’s extortion.
Reddit never gave Google traffic. They gave them content and data.
And Google thinks it can withdraw traffic from other sites unless they get data in return.
Google is a monopoly.
Literally extortion.
Magic beans.
Their line goes up when they show they’re investing in AI, and it goes down when it looks like they’re falling behind or not investing enough in it.
TBH, a lot of times I find myself interacting with ChatGPT instead of searching. It’s overhyped, but it’s useful.
Been on DDG for a few months now. Doesn’t look like I need to go back either.
I found Ecosia faster, with better results. Just letting you know in case you want to try it.
I would try it, but it’s tied to Microsloth’s Bing for results.
Isn’t DDG also tied to Bing? I could be mistaken.
Don’t think so. Haven’t examined the code though.
Just looked it up to confirm. From DuckDuckGo’s page on the topic:
Most of our search result pages feature one or more Instant Answers. To deliver Instant Answers on specific topics, DuckDuckGo leverages many sources, including specialized sources like Sportradar and crowd-sourced sites like Wikipedia. We also maintain our own crawler (DuckDuckBot) and many indexes to support our results. Of course, we have more traditional links and images in our search results too, which we largely source from Bing. Our focus is synthesizing all these sources to create a superior search experience.
Edit: That said, I’d rather use DDG than Bing because DDG eats Bing’s tracking for me, as I understand it.
Oh well. It’s just not possible to get totally away from the big dogs.
Bing’s results are superior to Google these days ime. Has been for a good while too.
DDG is shit too: search a name and they will tie it to your locale even if you turn off regional results.
Click a link and go back to the results and they have changed.
DDG is enshittifying.
Well, it gets me here and to .ml, so I can’t complain.
Lol… Peasants will accept it…
Sundar the creep knows it.
I’ve been really happy with Kagi since switching.
Same! I swore I wouldn’t pay for a search engine, but I feel like it’s absolutely worth it, considering the current state of things.
It’s definitely better… but thanks to Google SEO, the internet it’s bringing you results from is still filled with shit.
Correct me if I’m wrong, but doesn’t Kagi still use Google/Bing results?
I might be wrong, but they meta-search across multiple providers, including their own. The real benefit is that YOU can choose which search subjects to prioritize when trying to find something specific.
For normal search stuff, this feels like “old Google” (no AI spam). For detailed searching, it’s better than any other engine I’ve used.
Same. It’s amazing. I really like the feature where you can prioritise or deprioritise search results from certain websites.
100%. Best option ever
So how do I actually opt out? My website is just some personal hobby stuff on wordpress that only friends and family look at, I don’t need seo.
You should put these entries into your robots.txt file.
To block the Google search crawler for all of your site, use:
User-agent: Googlebot
Disallow: /
To block the Google AI training crawler, use:
User-agent: Google-Extended
Disallow: /
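If you want both, the two rules can go in the same robots.txt file. A minimal sketch of the combined file (as far as I know, Google-Extended is the token Google documents for opting out of generative AI training use):

# Keep Google Search from indexing the site
User-agent: Googlebot
Disallow: /

# Keep Google from using the site as generative AI training data
User-agent: Google-Extended
Disallow: /

Keep in mind robots.txt is a request rather than an access control: well-behaved crawlers honor it, but it doesn’t technically block anything.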
You rock, thank you!
What if I made a static site using Github pages hosting? Will having a robots.txt in my root folder ward off Google bhoots (devils)?
Yes.
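A minimal sketch, assuming a user or organization site published from the root of the default branch (the username here is hypothetical):

my-username.github.io/
├── index.html
└── robots.txt   (served at https://my-username.github.io/robots.txt, where crawlers look for it)

If your site publishes from a /docs folder or some other source, the file needs to sit at the root of whatever GitHub actually serves, since crawlers only check the top level of the host.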
We really do need to figure out how to make some kind of decentralized search engine.
Some discussion on that here: https://lemmy.world/comment/11859761
I hope it happens one day, but that’s an almost insurmountable task given the scale.
Take the entirety of the fediverse, and its entire history, and you’re probably talking a day’s worth of search engine indexing compute & storage.
The scale is large and the fediverse is incredibly small. Keeping my fingers crossed, but definitely not holding my breath.
In the meantime, I’ll use Kagi.
If I say no and revoke my consent, and they do it anyway …
Can you afford enough lawyers to prove it?
Justice is the original pay-to-win game. Seems it’s out of our budget though.
That’s actually good news. Maybe we’ll be able to revert the internet to the time before the Eternal September happened.
This will never happen. We might get some of the issues more regulated, and people may move away from others, but you can’t put the Furies back into the box. Things will change, but we will never have the early internet again.
We can, with closed communities that take some effort to enter. I pretty much ditched most mainstream social media and use what used to be mailing lists, plus Discord servers. It’s not about technology. The internet and access to it used to be simply exclusive, and we have to create exclusive channels to communicate about e.g. arts, history, technology, or even the occult, where there are no “free riders” with no knowledge. That’s what I mean, and this may happen imo. Quality over quantity.
Lemmy was at some point this pre-September place. Since then, it changed quite a bit.
I think we’ll always have to face either Eternal September or walls and restrictions everywhere, making it hard to join and discouraging many genuinely good folks.
Our best bet is to influence the Internet culture at large, since there’s no grand influx of people onto the Internet overall anymore. Of course, we’re going up against algorithmic rage machines, but this is a fight worth having. And what better place to start than the Fediverse?
I read that as “you can’t put the Furries back in the box” and it still worked
I’ve switched from DuckDuckGo to Ghostery Private search. I’ve been happier with the results than DDG.
I’m using SEARXNG. It’s a search engine aggregator and you can mix and match where you want your results to come from. It’s like using Google from a decade ago.
Interesting. That seems like a fairly heavy-duty setup, possibly more than most users would want to go about installing. But it’s something to keep in mind if needed.
There are hosted versions you can just use without installing at home.
I’m going to bookmark that and give them a try.
I remember discovering MetaCrawler in the 90s (before Google was even founded) and it quickly became the go-to search engine because its aggregate results were superior to any of the other options at the time. I don’t think its source mix was tunable, but that sounds like appropriate progress for 30 years.
I’m using SEARXNG.
Sounds like the Elon alternative for searching
Nice thanks for this
Please understand that this is the next ‘SEO’ shit.
It was going to be this from the very start.
Google is genuinely bad now. I switched to Ecosia which is just Bing with a simpler front end and they use their profits to plant trees. I don’t think Ecosia is particularly special though. Duck Duck Go, Bing whatever, they’re all better than Google.
Whenever I set up a new computer and then search for something, I’m always surprised at first by the awful layout and quality of the search results, before I realize that I haven’t changed the default search from Google. It’s awful now. Seriously, how are people using it?
My new favorite way to search is perplexity.ai. It’s an AI search tool that summarizes the loads of crap out there so you don’t need to read through the junk that people write. It provides sources, unlike using ChatGPT, which is incredibly valuable. All AIs make shit up, so having links to double check it is a must. Unlike Bing Chat, or whatever Microsoft calls it this week, you can ask follow up questions to home in on what you want.
As I understand it, this is only about using search results for summaries. If it’s just that and links to the source, I think it’s OK. What would be absolutely unacceptable is to use the web in general as training data for text and image generation (=write me a story about topic XY).
If it’s just that and links to the source, I think it’s OK.
No one will click on the source, which means the only visitor to your site is Googlebot.
What would be absolutely unacceptable is to use the web in general as training data for text and image generation.
This has already happened and continues to happen.
No one will click on the source, which means the only visitor to your site is Googlebot.
That was the argument with the text snippets from news sources. Publishers successfully lobbied for laws to be passed in many countries that required search engine operators to pay fees. It backfired when Google removed the snippets from news sources that demanded fees from Google. Their visitors dropped by a massive amount, 90% or so, because those results were less attractive for Google users to click on than the nicer results with a snippet and a thumbnail. So “no one will click on the source” was already disproven 10 or so years ago when the snippet issue was current. All those publishers have since entered free-of-charge licensing agreements with Google, and the laws are still in place. So Google is fine; upstart search engines are not, because they cannot pressure the publishers into free deals.
This has already happened and continues to happen.
With Gemini?
Look at you, changing my mind with your logicking ways. I think information should be free anyway, but I thought media companies were being at least remotely genuine about the impact here. Forgot that lobbyists be lobbying and that Google wouldn’t have let them win if it didn’t benefit them.
The context is not the same. A snippet is incomplete and often lacking important details. It’s minimally tailored to your query unlike a response generated by an LLM. The obvious extension to this is conversational search, where clarification and additional detail still doesn’t require you to click on any sources; you simply ask follow up questions.
With Gemini?
Yes. How do you think the Gemini model understands language in the first place?
The context is not the same.
It’s not the same but it’s similar enough when, as the article states, it is solely about short summaries. The article may be wrong, Google may be outright lying, maybe, maybe, maybe.
Google, as by far the web’s largest ad provider, has a business incentive to direct users towards websites, so that website operators keep paying Google money. Maybe I’m missing something, but I just don’t see the business sense in Google not doing that, and so far I haven’t seen anything approximating a convincing argument.
Yes. How do you think the Gemini model understands language in the first place?
Licensed and public domain content, of which there is plenty, maybe even content specifically created by Google as training data. “The Gemini model understands language” in itself is hardly proof of any wrongdoing. I don’t claim to have perfect knowledge or memory, so it’s certainly possible that I missed more specific evidence, but “the Gemini model understands language” by itself definitely is not it.
The latter will be the case sooner rather than later, I’m afraid. It’s just a matter of time with Google.
The latter will be the case sooner rather than later, I’m afraid. It’s just a matter of time with Google.
If that actually ends up being the case and passes legal challenges, basically all copyright could be abolished, which would definitely have some upsides but also downsides. All those video game ROM decompilation projects would suddenly be in the clear, as those are new source code computer-generated from copyrighted binary code, so not really different from an AI-generated image based on a copyrighted image used as training data. We could also ask Gemini to write a full-length retelling of Harry Potter, search-and-replace all the trademarked names, and sell that shit. Evil companies could train an AI on GNU/Linux source code and tell it to write an operating system. Clearly a derived work from GPL code, but without any copyright to speak of, all that generated code could be legally closed. I don’t like that.
I really hope those ROM sites will be cleared sooner rather than later. It hurt a lot to see some of the biggest ROM sites forced to close. Please sign: https://citizens-initiative.europa.eu/initiatives/details/2024/000007_en