2. I don't understand how ChatGPT works, but I have used it a few times.
3. I will use ChatGPT as the absolute measure of what LLMs are capable of.
---
1. I don't understand intelligence.
2. Humans are intelligent, humans can learn to do math.
3. LLMs are not good at math.
4. LLMs are not intelligent, they're just text compression.
---
1. I don't understand how LLMs work.
2. I have a decent grasp of how image compression works.
3. I will use my grasp of image compression to pretend LLMs are text compression.
4. I will apply all limitations of image compression to LLMs.
5. "What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large-language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry jpeg, when you still have the original?"
---
What's funny is that the author has produced exactly what many accuse LLMs of producing: flowery words that seem true but are not.
I don't think this should've been published.
These are both good reads if you find yourself tending to agree with the author:
You are arguing with your own straw man interpretation of the article. It isn't talking about all possible uses of LLMs, but focusing on specific uses now being proposed, to use ChatGPT and its possible successors instead of search.
You ignore his points about how achieving really good compression requires learning structure in the data that starts to amount to understanding: if you can understand the rules of arithmetic then results of arithmetic operations don't need to be saved, they can be reconstructed accurately. You mistake his pointing out the arithmetic errors ChatGPT makes for a claim that they are fundamental to LLMs, even though in the same article he points out that understanding arithmetic aids compression. And then you say that his article shouldn't have been published.
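The arithmetic point can be made concrete with a toy sketch. This is purely illustrative (the names `verbatim`, `operands`, and `reconstruct` are made up for this example, and neither LLMs nor real compressors work this way): if the "model" knows the rule of addition, it only needs to store the operands, and the results can be regenerated exactly rather than approximately.

```python
# Toy illustration: storing every sum verbatim vs. reconstructing
# results from a learned rule. Hypothetical example, not a real
# compressor or anything an LLM actually does internally.

pairs = [(a, b) for a in range(100) for b in range(100)]

# Naive storage: keep every "a+b=c" line verbatim.
verbatim = "\n".join(f"{a}+{b}={a+b}" for a, b in pairs)

# Rule-based storage: keep only the operands; the "rule" (addition)
# regenerates each result on demand.
operands = "\n".join(f"{a},{b}" for a, b in pairs)

def reconstruct(stored: str) -> str:
    lines = []
    for row in stored.splitlines():
        a, b = map(int, row.split(","))
        lines.append(f"{a}+{b}={a+b}")  # reapply the learned rule
    return "\n".join(lines)

# Reconstruction is exact, not "blurry", yet the stored form is smaller.
assert reconstruct(operands) == verbatim
print(len(operands), len(verbatim))
```

The point being sketched: once structure in the data is captured as a rule, the compressed form can be both smaller and lossless for everything that rule covers, which is why compression quality and something like understanding are linked.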
I will not say that your comment shouldn't have been published, because that is impolite and against the rules.
Well said. Ted Chiang is remarkably smart and imaginative. I’m kind of wondering if the article is satire, or will be revealed to be written by an AI, or something. It’s definitely a forest/trees mistake.
- Emergent Abilities of Large Language Models - https://arxiv.org/abs/2206.07682
- Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers - https://arxiv.org/abs/2212.10559v2