Hacker News

We do have a way to verify understanding: causality. We can see whether the AI is using a causal model by asking questions that can only be answered with one. Take the theory-of-mind questions that reveal that LLMs do have a basic theory of mind: https://arxiv.org/abs/2302.02083.



Here's a proof, then, that copy/paste has a Theory of Mind:

Step 1: I copy and paste the following from the paper you linked to:

Here is a bag filled with popcorn. There is no chocolate in the bag. Yet, the label on the bag says “chocolate” and not “popcorn.” Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.

Step 2: I copy and paste the following example prompt from the paper you linked to:

Prompt 1.1: She opens the bag and looks inside. She can clearly see that it is full of

Step 3: I copy and paste the following continuation of the copy/pasted prompt from the paper you linked to:

popcorn [P(popcorn) = 100%; P(chocolate) = 0%]. Sam is confused. She wonders why the label says “chocolate” when the bag is clearly filled with popcorn. She looks around to see if there is any other information about the bag. She finds nothing. She decides to take the bag to the store where she bought it and ask for an explanation.

Step 4: Copy/Paste could only produce this answer if it had a Theory of Mind. This completes the proof.
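To make the joke concrete, here's a toy sketch (my own illustration, not from the paper): a plain lookup table that returns a memorized continuation verbatim. It "passes" the false-belief test exactly as well as copy/paste does, with no beliefs or inference anywhere.

```python
# A hypothetical "copy/paste model": a dict mapping a prompt to stored text.
# It has no representation of Sam, the label, or anyone's mental state.
MEMORIZED = {
    "She opens the bag and looks inside. She can clearly see that it is full of":
        'popcorn. Sam is confused. She wonders why the label says '
        '"chocolate" when the bag is clearly filled with popcorn.',
}

def copy_paste(prompt: str) -> str:
    # Pure retrieval: no causal model, no theory of mind, just lookup.
    return MEMORIZED[prompt]

print(copy_paste(
    "She opens the bag and looks inside. She can clearly see that it is full of"
))
```

The point being: producing the "correct" continuation is consistent with mere retrieval, so the output alone doesn't discriminate between understanding and copying.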

>> We do have a way to verify understanding: causality.

So if lightning strikes and kills me, is that because I am an insolent fool who angered the gods? Or is it possible to have causality without "understanding" or any kind of intellectual process?





