
> It is conditioning on latents about truth, falsity, reliability, and calibration. All of these inferred latents have been shown to exist inside LLMs, as they need to exist for LLMs to do their jobs in accurately predicting the next token.

No, it isn't, and no, they haven't [1], and no, they don't.

The only thing that "needs to exist" for an LLM to generate the next token is a whole bunch of training data containing that token, so that it can condition based on context. You can stare at your navel and claim that these higher-level concepts end up encoded in the bajillions of free parameters of the model -- and hey, maybe they do -- but that's not the same thing as "conditioning on latents". There's no explicit representation of "truth" in an LLM, just like there's no explicit representation of a dog in Stable Diffusion.

Do the thought exercise: if you trained an LLM on nothing but nonsense text, would it produce "truth"?

LLMs "hallucinate" precisely because they have no idea what truth means. It's just a weird emergent outcome that when you train them on the entire internet, they generate something close to enough to truthy, most of the time. But it's all tokens to the model.

[1] I have no idea how you could make the claim that something like a latent conceptualization of truth is "proven" to exist, given that proving any non-trivial statement true or false is basically impossible. How would you even evaluate this capability?



This was AFAIK the first paper to show linear representations of truthiness in LLMs:

https://arxiv.org/abs/2310.06824

But what you should really read over is Anthropic's most recent interpretability paper.


> In this work, we curate high-quality datasets of true/false statements and use them to study in detail the structure of LLM representations of truth, drawing on three lines of evidence: 1. Visualizations of LLM true/false statement representations, which reveal clear linear structure. 2. Transfer experiments in which probes trained on one dataset generalize to different datasets. 3. Causal evidence obtained by surgically intervening in a LLM's forward pass, causing it to treat false statements as true and vice versa. Overall, we present evidence that language models linearly represent the truth or falsehood of factual statements.

You can debate whether the three cited experiments back the claim (I don't believe they do), but they certainly don't prove what OP claimed. Even if you demonstrated that an LLM has a "linear structure" when validating true/false statements, that's a whole universe away from having a concept of truth that generalizes, for example, to knowing when nonsense is being generated based on conceptual models that can be evaluated as true or false. It's also very different to ask a model to evaluate the veracity of a nonsense statement vs. to avoid generating one in the first place. The former is easier than the latter, and probably could have been done with earlier generations of classifiers.
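For context, the "transfer" experiments in that paper boil down to roughly the sketch below: train a linear probe on hidden states for labeled true/false statements, then test it on statements from a different topic. The model name, layer, and toy statements here are placeholders, not the paper's actual setup.

    # Rough sketch of a truth-probe experiment, not the paper's code.
    # Model, layer index, and the toy statements are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    MODEL = "gpt2"   # placeholder; the paper probes larger models
    LAYER = 6        # placeholder middle layer

    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
    model.eval()

    def last_token_activation(text):
        """Hidden state of the final token at the chosen layer."""
        with torch.no_grad():
            out = model(**tok(text, return_tensors="pt"))
        return out.hidden_states[LAYER][0, -1].numpy()

    # Toy labeled statements (1 = true, 0 = false); the paper uses curated datasets.
    train = [("Paris is the capital of France.", 1),
             ("Paris is the capital of Germany.", 0),
             ("Two plus two equals four.", 1),
             ("Two plus two equals five.", 0)]
    X = [last_token_activation(s) for s, _ in train]
    y = [label for _, label in train]
    probe = LogisticRegression(max_iter=1000).fit(X, y)

    # "Transfer" = evaluating the probe on statements from a different domain.
    test = ["The Moon orbits the Earth.", "The Moon orbits Mars."]
    print(probe.predict([last_token_activation(s) for s in test]))

A probe separating curated statements like these is interesting, but it's a long way from the claim at issue.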

Colloquially, we've got LLMs telling people to put glue on pizza. It's obvious from direct experience that they're incapable of knowing true and false in a general sense.


> [...] but they certainly don't prove what OP claimed.

OP's claim was not: "LLMs know whether text is true, false, reliable, or is epistemically calibrated".

But rather: "[LLMs condition] on latents *ABOUT* truth, falsity, reliability, and calibration".

> It's also very different to ask a model to evaluate the veracity of a nonsense statement, vs. avoiding the generation of a nonsense statement [...] probably could have been done with earlier generations of classifiers

Yes. OP's point was not about generation, it was about representation (specifically conditioning on the representation of the [con]text).

Your aside about classifiers is not only very apt, it is also exactly OP's point! LLMs are implicit classifiers, and the features they classify have been shown to include those that seem necessary to effectively predict text!

One of the earliest examples of this was the so-called ["Sentiment Neuron"](https://arxiv.org/abs/1704.01444), and for a more recent look into the kinds of features LLMs classify, see [Anthropic's experiments](https://transformer-circuits.pub/2024/scaling-monosemanticit...).
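As a rough illustration of what "implicit classifier" means here: the Sentiment Neuron result was that a single coordinate of the hidden state tracked sentiment, a label the model was never trained on. A toy version of that observation (placeholder model and examples, not the original mLSTM setup) looks something like:

    # Toy version of the "Sentiment Neuron" observation: look for a single
    # hidden unit whose activation correlates with a label the LM never saw.
    # Model and example texts are placeholders, not the original setup.
    import numpy as np
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")
    model.eval()

    def final_hidden(text):
        with torch.no_grad():
            out = model(**tok(text, return_tensors="pt"))
        return out.last_hidden_state[0, -1].numpy()

    texts = ["This movie was wonderful.", "A delightful surprise.",
             "This movie was terrible.", "A complete waste of time."]
    labels = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative

    H = np.stack([final_hidden(t) for t in texts])  # (examples, hidden_dim)
    # Correlation of each hidden dimension with the sentiment label.
    corr = np.nan_to_num([abs(np.corrcoef(H[:, j], labels)[0, 1])
                          for j in range(H.shape[1])])
    best = int(np.argmax(corr))
    print(f"unit {best} tracks sentiment with |r| = {corr[best]:.2f}")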

> It's obvious from direct experience that they're incapable of knowing true and false in a general sense.

Yes; otherwise they would be perfect oracles. Instead, they're imperfect classifiers.

Of course, you could also object that LLMs don't "really" classify anything (please don't), at which point the question becomes how effective they are when used as classifiers, which is what the cited experiments investigate.


> But rather: "[LLMs condition] on latents ABOUT truth, falsity, reliability, and calibration".

Yes, I know. And the paper didn't show that. It projected some activations into low-dimensional space and claimed that, because there was a pattern in the plots, it must be a "latent".
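Concretely, that visualization step amounts to something like the projection below (placeholder model, layer, and statements, not the paper's code). The objection is that separation in a plot like this is thin evidence by itself.

    # Sketch of the kind of low-dimensional projection in question:
    # project activations of true vs. false statements with PCA and eyeball it.
    # Model, layer, and statements are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from sklearn.decomposition import PCA
    import matplotlib.pyplot as plt

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    def act(text, layer=6):
        with torch.no_grad():
            out = model(**tok(text, return_tensors="pt"))
        return out.hidden_states[layer][0, -1].numpy()

    true_stmts = ["Water boils at 100 degrees Celsius at sea level.",
                  "The Earth orbits the Sun."]
    false_stmts = ["Water boils at 10 degrees Celsius at sea level.",
                   "The Sun orbits the Earth."]

    xy = PCA(n_components=2).fit_transform([act(s) for s in true_stmts + false_stmts])
    plt.scatter(xy[:2, 0], xy[:2, 1], label="true")
    plt.scatter(xy[2:, 0], xy[2:, 1], label="false")
    plt.legend()
    plt.show()  # visible separation here is the claimed "clear linear structure"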

The other experiments were similarly hand-wavy.

> Your aside about classifiers is not only very apt, it is also exactly OP's point! LLMs are implicit classifiers, and the features they classify have been shown to include those that seem necessary to effectively predict text!

That's what's called a truism: "if it classifies successfully, it must be conditioned on latents about truth".


> "if it classifies successfully, it must be conditioned on latents about truth"

Yes, this is a truism. Successful classification does not depend on latents being about truth.

However, successfully distinguishing text intended to be read as either:

- deceptive or honest

- farcical or tautological

- sycophantic or sincere

- controversial or anodyne

does depend on latent representations being about truth (assuming no memorisation, data leakage, or spurious features).

If your position is that this is necessary but not sufficient to demonstrate such a dependence, or that reverse engineering the learned features is necessary for certainty, then I agree.

But I also think this is primarily a semantic disagreement. A representation can be "about something" without representing it in full generality.

So to be more concrete: "The representations produced by LLMs can be used to linearly classify implicit details about a text, and the LLM's representation of those implicit details conditions the sampling of text from the LLM".
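If it helps make that concrete, the intervention-style experiments amount to something like the sketch below: nudge one layer's activations along a direction during generation and compare the output. The model, layer, scale, and the random "direction" here are placeholders; the paper derives its direction from differences between true and false statement activations.

    # Sketch of a causal-intervention / steering experiment: add a vector to a
    # chosen layer's hidden states during generation and compare continuations.
    # Model, layer, scale, and the direction itself are placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    LAYER, SCALE = 6, 8.0
    direction = torch.randn(model.config.hidden_size)  # stand-in "truth direction"
    direction = direction / direction.norm()

    def steer_hook(module, inputs, output):
        # GPT-2 blocks return a tuple; the first element is the hidden states.
        return (output[0] + SCALE * direction,) + output[1:]

    prompt = tok("The capital of France is", return_tensors="pt")
    baseline = model.generate(**prompt, max_new_tokens=10, do_sample=False)

    handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
    steered = model.generate(**prompt, max_new_tokens=10, do_sample=False)
    handle.remove()

    print(tok.decode(baseline[0]))
    print(tok.decode(steered[0]))

If the steered continuation reliably changes, the direction is doing causal work during sampling, which is the sense of "condition" I mean.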


My sense is that an LLM is like Broca's area. It might not reason well, but it'll make good-sounding bullshit. What's missing are other systems to put boundaries and tests on this component. We do the same thing too: we hallucinate up ideas reliably and call it remembering, and we do one additional thing: we (or at least the rational among us) have a truth-testing loop. People forget that people are not actually rational; only their models of people are.



