
1. I don't understand how LLMs work.

2. I don't understand how ChatGPT works, but I have used it a few times.

3. I will use ChatGPT as the absolute measure of what LLMs are capable of.

---

1. I don't understand intelligence.

2. Humans are intelligent, humans can learn to do math.

3. LLMs are not good at math.

4. LLMs are not intelligent, they're just text compression.

---

1. I don't understand how LLMs work.

2. I have a decent grasp of how image compression works.

3. I will use my grasp of image compression to pretend LLMs are text compression.

4. I will apply all limitations of image compression to LLMs.

5. "What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large-language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry jpeg, when you still have the original?"

---

What's funny is that the author has produced exactly what LLMs are so often accused of producing - flowery words that seem true but are not.

I don't think this should've been published.

These are both good reads if you find yourself tending to agree with the author -

- Emergent Abilities of Large Language Models - https://arxiv.org/abs/2206.07682

- Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers - https://arxiv.org/abs/2212.10559v2




You are arguing with your own straw-man interpretation of the article. It isn't talking about all possible uses of LLMs; it focuses on a specific use now being proposed: replacing search with ChatGPT and its possible successors.

You ignore his point that achieving really good compression requires learning structure in the data, which starts to amount to understanding: if you understand the rules of arithmetic, then the results of arithmetic operations don't need to be saved; they can be reconstructed exactly. You mistake his pointing out ChatGPT's arithmetic errors for a claim that such errors are fundamental to LLMs, even though in the same article he notes that understanding arithmetic aids compression. And then you say that his article shouldn't have been published.
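To make the compression point concrete, here is a toy sketch in Python (my own illustration, not from the article and not how LLMs actually store arithmetic): memorizing results takes space proportional to the number of results, while capturing the rule lets you regenerate them all exactly.

    # Toy illustration of "learning the rule beats memorizing the results".
    # Memorizing every sum for 0 <= a, b < 300 means storing 90,000 entries;
    # the rule that generates them fits in one line and reproduces each one exactly.

    N = 300

    # "Memorization": store every result explicitly (the lookup-table approach).
    table = {(a, b): a + b for a in range(N) for b in range(N)}

    # "Understanding": store only the rule.
    def add(a, b):
        return a + b

    # The rule reconstructs the whole table with no stored results and no loss.
    assert all(add(a, b) == v for (a, b), v in table.items())
    print(f"entries memorized: {len(table):,}; rules stored: 1")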

I will not say that your comment shouldn't have been published, because that is impolite and against the rules.


Who is proposing the use of ChatGPT, in its current form, for search?

Bing search is not just "ChatGPT" added next to the Bing search results. Please look up how it works; it is quite sophisticated and (imo) well designed.

I have access to it; would you like a demo?


>Who is proposing the use of ChatGPT, in its current form, for search?

The dozens of posts I've seen here saying "this is going to replace Google!", for starters.


Well said. Ted Chiang is remarkably smart and imaginative. I’m kind of wondering if the article is satire, or will be revealed to have been written by an AI, or something. It’s definitely a missing-the-forest-for-the-trees mistake.


I can't take seriously anyone who lists dozens of cons and only a couple of pros for a product like this.


What product? You mean the experimental "research release" that is in such demand that people are now paying for guaranteed access?



