One issue is that AI skews the costs paid by the parties to a communication. If someone wrote something and then I read it, the effort I spent reading and comprehending it is probably lower than the effort the author had to exert to create it.
On the other hand, with AI slop, the cost to read and evaluate is greater than the cost to create, meaning that my attention can be easily DoSed by bad actors.
That would be the best-case outcome for some, and even that is a horribly bad outcome. But the vast majority of people would get DDoSed, scammed, and misled by politicians and political actors. The erosion of trust caused just by humans being intellectually dishonest and tribal is already bearing really dark fruit... covering the globe in LLM slop on top of that will predictably make it much worse.
Not that erosion of trust, an erosion of trust. Big difference.
But yes, an erosion of trust was already there, just as there was never perfect trust, and just as even in the worst hellscape humans can physically maintain, "there will always be some trust left, somewhere". All of that is true, but it also doesn't say much.
Erosion of trust is also not something that just happens or "is here now"; it describes a living process between humans and groups of them, and you can reverse it with honesty. Erosion and regrowth of trust happen all the time, you might say. Rebuilding takes time, kind of like how reversing erosion and replanting takes longer than eroding and cutting things down, but so what.
The bizarre part is the first panel in the comic! I'm not sure where people get the idea that they need to fluff up their emails or publications. The impulse exists, sure; I'm just saying I've never felt the need to do it, nor have I ever (consciously, of course) valued a piece of text more because it was fluffier and more verbose. I do have a bad habit of writing over-verbosely myself (I'm doing it now!), but it's a flaw I indulge in on my own keyboard. I use LLMs plenty often, but I've never felt the need to ask one to fluff up and sloppify my writing for me.
But I really want to know where the idea that fluffier text = better (or more professional?) comes from. We have plenty of examples of how actual high-up business people communicate, and it's generally quick and concise, not paragraphs of prose.
Even from marketing/salespeople, I generally value the efficient and concise emails way more than the ones full of paragraphs. Maybe this is an effect of the LLM era, but I feel like it was true before it, too.
This is partly what led me to leave a job. Coworkers would send me their AI slop expecting me to review it. Management didn’t care, since it checked the box. The deluge of information, and the ease of creating it, is what’s made me far more sympathetic to regulation.