It takes real effort to maintain a solid understanding of the subject matter when using AI. That is the core takeaway of the study for me, and it lines up with something I have vaguely noticed over time. What makes this especially tricky is that the downside is very stealthy. You do not feel yourself learning less in the moment. Performance stays high, things feel easy, and nothing obviously breaks. So unless someone is actively monitoring their own understanding, it is very easy to drift into a state where you are producing decent-looking work without actually having a deep grasp of what you are doing. That is dangerous in the long run, because if you do not really understand a subject, it will limit the quality and range of work you can produce later. People need to be made explicitly aware of this effect, and individually they need to put real effort into checking whether they actually understand what they produce when using AI.
That said, I also think it is important not to take an overly negative message from the study. Many of the findings are exactly what you would expect if AI is functioning as a form of cognitive augmentation. Over time, you externalize more of the work to the tool. That is not automatically a bad thing; externalization is precisely why tools increase productivity. When you use AI, you can often get more done because you are spending less cognitive effort per unit of output.
And this gets to what I see as the study's main limitation. It compares different groups on a fixed unit of output, which implicitly assumes that AI users will produce the same amount of work as non-AI users. But that is not how AI is actually used in the real world. In practice, people often use AI to produce much more output, not the same output with less effort. If you hold output constant, of course the AI group will show lower cognitive engagement. A more realistic scenario is that AI users increase their output until their cognitive load is similar to before, just spread across more work. That dimension is not captured by the experimental design.