Given the title "Search Results Gone Wrong", I'd like to take this opportunity to try to shame Kagi into fixing the search results for the "More results" feature. This simple feature is broken in that it often gives you repeats of the same initial results it gave. That is, instead of giving you "More results", it gives you a lot of "Same results".
I reported this as a bug about 6 months ago, and was quickly told it was planned to be fixed. But it hasn't been fixed. I checked in again a few weeks ago to see if there was any progress, and apparently they've given up because it is too hard: "Apologies, seems I forgot to update the thread. Unfortunately it is in fact trickier than it looks to dedupe these results. Mainly this is a result of how we work with results from upstream sources, and deduping is heavily complicated by caching issues."
Kagi, you're generally great. I'm usually happy to be a paying customer. But I refuse to believe that deduping a list of URLs is actually too hard for you. Maybe I'm one of the few users who actually cares about searching for web pages, but for my use cases my search results would be much better if you actually gave me more results when I click on "More results". How is this not considered core functionality for a search engine? Please fix this!
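To be fair to the support reply, the hard part is presumably server-side (caching, upstream providers), but the client-visible version of the dedupe is tiny. A sketch, assuming each page arrives as a list of result dicts with a `url` field (a hypothetical shape, not Kagi's actual API):

```python
def dedupe_pages(pages):
    """Yield results in order, skipping any URL already seen on an earlier page."""
    seen = set()
    for page in pages:
        for result in page:
            if result["url"] not in seen:
                seen.add(result["url"])
                yield result

# Two fake pages whose second page overlaps the first.
page1 = [{"url": "https://a.example"}, {"url": "https://b.example"}]
page2 = [{"url": "https://b.example"}, {"url": "https://c.example"}]
deduped = list(dedupe_pages([page1, page2]))
# b.example appears once, not twice
```

Of course a set of every URL ever served doesn't survive caching or multiple backend shards, which is likely the real complication, but for a single "More results" session the state is small.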
I feel like that is either hubris (they are overconfident in their ranking) or they have some other reason. Now you bring this up, and it seems to fit with the same kind of thing.
It reminds me of this article which brings up a bunch of suspicious things about search engines, and talks about how weird it is that so many engines limit how far you can go into the results: https://archive.org/details/search-timeline
Somewhat related, Reddit has been broken for me in a very similar way for more than a year now. Whenever I scroll down to load more pages, it populates with roughly 80% of the same threads it loaded on previous pages. Over and over, such that by the time I'm on page 6 or so, I will have six copies of the exact same thread.
Really stupid bug that probably only happens with old.reddit.com or RES or something. But it's nice in that it keeps me off of Reddit, I guess.
Common bug for caching lists that are reordering all the time.
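That failure mode is easy to reproduce: offset-based pagination over a ranking that shifts between requests will hand back items you have already seen. A minimal, hypothetical illustration (not Reddit's actual code), where each simulated request rotates the underlying list by one position before slicing:

```python
items = [f"thread-{i}" for i in range(20)]

def fetch_page(offset, size=5):
    # Simulate a ranking that shifts between requests:
    # each call rotates the list by one position before slicing.
    items.insert(0, items.pop())
    return items[offset:offset + size]

shown = []
for page in range(4):
    shown.extend(fetch_page(page * 5))

# 20 results served, but only 17 distinct threads: each page after
# the first re-serves an item a previous page already showed.
```

A stable cursor ("everything after thread-X") avoids this; a numeric offset into a moving list cannot, because the offset no longer points where it did when the previous page was served.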
Unlikely to be fixed, though, since it causes you to quickly skip the repeated content (without the site having to serve more than headlines) and see more ads. The "bug" multiplies the value they can extract from each post, an important metric now that LLM slop has started to erode the perceived value of posts from random strangers (Reddit's only resource).
I'm not saying they introduced it on purpose, the way Google intentionally showed bad search results to encourage a second query, but I'm not confident that fixing it will be high on the priority list until it makes people leave the site.
As a French person who grew up going to a school in Belgium for a bit as a kid, I was quite amused by their numbers.
My thought as a 6 year old was "aw, are soixante-dix, quatre-vingt, and quatre-vingt-dix too complicated for you?"
Even now, while I think the French numbers make objectively no sense (even the countries that do count in 20s are at least more consistent than us), I can't help but find the Swiss and Belgian numbers "cute". Like "Baby's first 70 to 99".
And for whatever reason, I don't have the same opinion about 70-99 in English, Portuguese or Spanish.
Edit: just to be clear, I think my thoughts about it are absurd but they're too deeply engrained and decades old to shed completely.
It’s a well-known phenomenon that with the internet and modern media, large countries’ version of a language can affect the speech of the smaller countries using that language. Think kids in Portugal today growing up using lots of Brazilian words to their parents’ dismay, or americanisms slipping into UK speech. This makes me wonder if any young Walloon French speakers have started to pick up standard French higher numerals.
If we use the strict definition of organic results in a SERP, these aren't the result of webpage indexing; they're the output of widgets and other natural-language parsing in Kagi.
Here's one I found: search for "spaceweather" and you get weather for East Derry, New Hampshire. Definitely not space. The results I actually need are the first and second links below it. A friend pointed out that an astronaut (Alan Shepard, maybe?) lived there, which is the only connection to space I can think of for that city.
I’ve enjoyed it. It’s the first time I’ve not been left wanting by a search alternative. If it were to go away tomorrow, I’m not sure what I’d do for search.
This. Or quora or pinterest or twitter/x or etc etc
I can outright block domains or just adjust their weight. Great for my personal prefs but also huge with the family account and helping keep the BS out of sight for the kids without going full restrictive.
I try not to buy from US companies these days, but Kagi is really so good that I make an exception here, despite the US government getting some of my money.
It's a matter of scale. Objectively, Yandex is a great resource, and Kagi's results would be degraded without it. Pennies per user go to them. The total amount of money that has ever flowed from Kagi to Yandex is what? 30 seconds of EU oil and gas purchases?
I use duckduckgo and live in a neighboring country, so I know Russian well (thanks, imperialism) and have to search things in it from time to time. It's still good at those queries, so this is just an excuse.
My wife and I got the duo package because we do a lot of writing and need citations and sources. Compared to google and DDG it is less noisy and returns fewer spammy pages. We’re giving it a year to see if it is worth it.
Absolutely, yes. It has completely replaced Google for search for me. Good AI search as well if you're into that (but they don't force you to use it!).
I think generally yes. I tried it out for free for a while, found it was substantially better than Google and DuckDuckGo, and paid for a subscription.
Recently it has not had such a strong quality margin, which I suspect is due to the AI slop that all of the search engines are fighting against (due to errors both ways in their detection). I'm hoping this is temporary.
To be clear, I don't use any of their features except search (and domain filtering).
I think so. I switched to it because I have YouTube TV - essentially Google as cable TV provider - and noticed how commercials became too correlated with recent Google searches for comfort. The only time I end up switching back to Google is for looking up local business reviews.
Yes: you get reliable source information and don’t get inaccurate summaries. E.g. last week I used Gemini to answer a plant biology question and got two contradictory answers based on minor variations in the wording because it incorrectly relied on blog spam over peer-reviewed articles for the first query.
The initial false answer was baldly asserted by the LLM without sources in the first two paragraphs but some of the phrasing it used was enough to locate the non-authoritative blog content it was apparently laundering. Had it accurately cited sources, it would’ve been easy to see that this random WordPress site saying X wasn’t as authoritative as the PubMed hits saying !X.
I second the utility of the Kagi Assistant. I didn't think I would use it much but now do so constantly. Especially because ending a regular search query in a question mark will cause the results page to lead with the Assistant answer! It's a delightful way to try both search and LLMs in one UI interaction.
Here's the bug report: https://kagifeedback.org/d/7022-clicking-more-results-yields...