In theory this makes sense, but in practice, now that LLMs are writing the PR summaries, we just have even more slop to wade through to figure out exactly what a change is trying to achieve. I think the slide in this direction already started with exhaustive PR templates that required developers to fill in a whole bunch of fluff just to open a PR. The templates didn't make bad developers good; they just caused them to produce more bad content for review.
My experience with LLM-generated summaries is the same as it was with the templates: many developers complete them in a way that is entirely self-referential and lacking in context. I don't need a comment or a summary that tells me exactly what I could have understood by reading the code. The point of adding English-language annotations to source code is to explain how a particular change solves a complex business problem, or how it fits into a long-term architectural plan, that sort of thing. But the people who never cared about that high-level stuff don't have the context to write useful summaries, and LLMs don't either.
The worst thing I've seen recently is when you push for more clarity and context on the reasons behind a change, and that request just gets piped into an LLM. The AI invents a business problem or architectural goal that doesn't actually exist, and you get back a summary that looks plausible in the abstract, and may even support the code changes it describes, but still doesn't link to anything the team or company is actually trying to achieve, which costs the reviewer even more time to check. AI proponents might say "well, they should have fed the team OKRs and the company mission/vision/values into the LLM for context", but that defeats the point of having the code review in the first place. If the output is performative rather than instructive, the whole process is a waste of time.
I'm not sure what the solution is, although I do think this isn't a problem that started with LLMs; it's just an evolution of a challenge we've always faced: how to deal with colleagues who aren't really engaged.