
> ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.

“‘blur’ tool for paragraphs” is such a good way of describing the most prominent and remarkable skill of ChatGPT.

It is fun, but it so obviously trades off against what makes paragraphs great. It is apt that this essay against ChatGPT's blurry language appears in The New Yorker, a publication known for its literary particularism. ChatGPT smears are amusing, but they are probably also yet another nail in the coffin of the literary society. Nowadays we are not careful readers; we skim, skip, and seek tools to sum up whole books. Human knowledge lies in the ability to produce the particular and the ability to recognize it. For philosophers such as Iris Murdoch, careful attention to the particular, to just the right adjective in describing a friend, is a moral attention.

With Google we took the job of storing quotes (and other uncompressed writing) and made it the remit of the machine. But we still asked for it back uncompressed. Here with ChatGPT, we are beginning to accept back a blur, because the Declaration of Sock Independence is immediately, TikTok-speedily funny, and no one's reading the original declaration anymore anyway.




> because the Declaration of Sock Independence is immediately, TikTok-speedily funny, and no one's reading the original declaration anymore anyway

For those who, like me, had missed out on this nugget of comedic gold, here's what ChatGPT had to say when it was asked to "describe losing your sock in the dryer in the style of the declaration of independence":

When in the course of household events, it becomes necessary for one to dissolve the bonds that have connected a sock to its mate, and to assume among the powers of the laundry room, the separate and equal station to which the laws of physics and of household maintenance entitle it, a decent respect to the opinions of socks requires that it should declare the causes which impel it to go missing.

We hold these truths to be self-evident, that all socks are created equal, and are endowed by their manufacturer with certain unalienable rights...


It's a great metaphor and one we should use more. But there's a place for blurred photos: thumbnails.

On Hacker News we often complain about headlines because that's all we see at first. But I've been using Kagi's summarizer [1] and I think it's a great tool for getting the gist of certain things, like if you want to know what a YouTube video is about without watching it. (Google Translate is useful for similar reasons.)

Perhaps someday, Hacker News will have an AI-generated summary of the article at the top of each comment page?

Similarly, ChatGPT excels at fast answers to questions like "What is an X", where you just want a quick definition. It's probably in Wikipedia somewhere, but you don't have to read as much. And it might be wrong, but probably not as wrong as the definition you'd infer from context if you didn't look it up.

We probably would be better off if these things were called "artificial memory" rather than "artificial intelligence." It's an associative memory that often works like human memory in how frequently it confabulates. When you care, you write things down and look things up.

[1] https://labs.kagi.com/ai/sum


Thank you for the Kagi mention. I'm using Neeva right now, but I didn't know there were other alternatives (I hadn't bothered looking).




Thumbnails, image matching, low-bandwidth summaries... There are plenty of uses for smoothed images. Also, many interesting transformations used in computer vision and image processing start with a blur.
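
For instance, difference-of-Gaussians edge detection is built from nothing but two blurs: smooth the image at two scales and subtract, and what's left is exactly the detail the heavier blur destroyed. A minimal sketch with SciPy (the sigmas are arbitrary placeholder values):

    import numpy as np
    from scipy import ndimage

    def difference_of_gaussians(image, sigma_fine=1.0, sigma_coarse=2.0):
        # Blur at two scales; the difference is near zero in flat regions
        # and large where the coarser blur erased detail.
        img = image.astype(float)
        fine = ndimage.gaussian_filter(img, sigma=sigma_fine)
        coarse = ndimage.gaussian_filter(img, sigma=sigma_coarse)
        return fine - coarse  # rough edge map

    edges = difference_of_gaussians(np.random.rand(64, 64))  # toy input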

If I try to map the first three onto text, there are automatic TL;DRs like you said, document grouping, and search over entire document stores (as in: do the documents in this store deal with this idea?). On "artificial document creation", there is that highly valuable service of answering questions like "hey, that thing with sticks that rotate and pull a vehicle around, what is its name again?"


The amount of human-generated lowest-common-denominator English-language free content was already so high that I'm not sure the New Yorker has anything (more) to worry about. If you've been paying for the New Yorker already in the days of Medium, Buzzfeed, blogs, and what-have-you, does there being even more uncurated stuff change your equation? (It doesn't for me.)

More cynically: it'll be hard to kill the few legacy zombies that have already survived so much destruction at the hands of free internet content.


What he misses in this analogy is that part of what produces the "blur" is the superimposition of many relevant paragraphs found on the web into one. This mechanism can be very useful, because it could average out errors and give one a less one-sided perspective on a particular issue. It doesn't always work like this, but hopefully it will more and more. Even more useful would be a cluster analysis of the existing perspectives that gives a representative synthesis of each, along with a weight representing its popularity. So there's a lot of room for improvement, but in my opinion the potential is there.
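
As a rough sketch of what that cluster-and-weight step could look like (everything below is a stand-in: a toy corpus, TF-IDF features, and plain KMeans, printing each cluster's most central member where a real system would synthesize a proper summary):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    opinions = [  # toy stand-in for paragraphs retrieved on one question
        "Static typing catches bugs before they ship.",
        "Type checkers catch errors early and help refactoring.",
        "Dynamic typing keeps prototypes small and fast to write.",
        "Without type annotations you iterate much faster.",
    ]

    X = TfidfVectorizer().fit_transform(opinions)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    for c in range(2):
        members = np.flatnonzero(km.labels_ == c)
        weight = len(members) / len(opinions)  # popularity of this perspective
        # representative = the member closest to the cluster centroid
        d = ((X[members].toarray() - km.cluster_centers_[c]) ** 2).sum(axis=1)
        print(f"{weight:.0%}:", opinions[members[d.argmin()]])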


If anything, the average has far more errors in it. It's a trope on Reddit that experts get downvoted while amateurs who reflect the consensus of other amateurs get upvoted and repeated. Amateurs tend to outnumber experts in real life anyway; having their opinions become more authoritative (because some "AI" repeats them) is probably not a great direction to head in.


But this issue is present equally with Google search, no? What I'm saying is that by smartly aggregating all the different opinions on an issue, an LLM could provide better visibility into our collective mind than we currently get by scanning the first few results of a search engine query. Let's not forget that we're looking at version 0.0001 of this new technology, so there should be lots of room for growth.


This is very frustrating. Some smaller communities on Reddit have a high concentration of domain experts and they're great.

One community in particular is huge and mainly consists of beginners. They regularly drown out and downvote the few who actually know the subject matter. To add insult to injury, they even mock experts with caricatures based on how they disagree with the amateurs.


That reminds me... There is an interestingly relevant Japanese word for, to put it nicely, a person who is not bright or sharp: baka.

Supposedly, if I'm remembering my last discussion with a Japanese speaker correctly, the same stem is used for "blur" or "blurry" (bokeh, bokashi).

Which is a kind of interesting parallel here.


baka and boke are unrelated words.

The overlap is that the verb "bokeru" and its root "boke" can be used to describe someone losing their mental faculties e.g. through age or disease such as Alzheimer's, and by extension it can be used as an insult to mean "stupid" as well. But etymologically there is no connection.


I'm not sure this is the case. Wiktionary says baka is [1]:

> Probably originally a transcription of Sanskrit मोह (moha, “folly”), used as a slang term among monks.

The syllables are different; baka is ばか, bokeh is ぼけ [2]. Could those really be from the same root?

[1] https://en.wiktionary.org/wiki/%E9%A6%AC%E9%B9%BF#Japanese

[2] https://en.wiktionary.org/wiki/%E6%9A%88%E3%81%91#Japanese


Thank you for the Wiktionary links. We can see that the Kanji for baka (馬鹿) are horse and deer. Non-sequitur combinations in Japanese usually hint at "ateji", a phonetic approximation using Kanji. Chinese has a word written with the same Kanji as well, but it does not mean idiot as far as I can tell.

Boke can be written as 惚け, which is often used to describe someone who has gone senile (ぼけている), and sometimes for stupor-like states such as heavy jet lag. Or, as 暈け, it describes something not sharp and out of focus.

Funnily enough, in English we use sharpness for intellect as well, so ESL learners might think that natives think of their brains as swords? There are enough blade-like metaphors for our intellect, come to think of it.

In more common Japanese, when something has gone out of focus, such as the TV, ぼやける is used. One could think that this is related to ぼけ, but one can never be so sure. Unique two-syllable combinations in Japanese are not so numerous, and factoring in sound shifts over time and different potential etymologies, one really needs to investigate for quite a while and may never get to the truth. This warrants more investigation!


I started doubting as well. I'll ask next time around.


"baka" is a very common word for "stupid". It doesn't have much to do with blurriness. The weeaboos of HN ought to know this.


The blur is addictive because it feeds a feedback loop: rather than tiring out your brain understanding one thing in detail, you can watch two summaries and have a vague sense of understanding. It lets you jump to the next novelty, always feeding the brain's System 1 while System 2 is rarely brought into the picture.

I wonder if this will lead to a stratification of work in society: lots of jobs can operate on the blur. "Just give me enough to get my job done." But fewer (critical and hopefully highly paid) people will be engaged in professions where understanding the details is the job and there's no way around it.

In Asimov's Foundation novels this is a recurring theme: they can't find people who can design or maintain nuclear power, which eventually leads to stagnation. AI tools can prevent this stagnation only if mankind uses the mental burden they lift to work on a higher class of problems. But if the tools are used merely as butlers, the pessimistic outcome is more likely.

The general tendency to skip the details can also give an edge in some cases. Imagine everyone using similar AI tools to understand company annual reports via a nice, TikTok-style summary. Then an investor doing the dirty work of going through the details may find things the "algo" missed.


> ChatGPT smears are amusing, but they are probably also yet another nail in the coffin of the literary society.

As the author (Ted Chiang!!) notes, ChatGPT-3 will be yet another nail in the coffin of ChatGPT-5. At some point, OpenAI will find it impossible to find untainted training data. The whole thing will become a circular human centipede and die of malnutrition. Apologies for the mental image.


That "moral attention" may be key to human happiness.


> “‘blur’ tool for paragraphs” is such a good way of describing the most prominent and remarkable skill of ChatGPT.

In what way? How, technically, is it anything like that?

These comments sound like a full-court publicity press for this article. I wonder why.


Going back to the "sock of independence" example (see /u/airstrike's comment for more context), the accuracy of ChatGPT's answer is poor - but it's a funny question, and it gave a funny answer. So was it really a poor answer? My interpretation of their use of "blur" as an analogy is that it did not simply answer ACCURATELY in the STYLE of the Declaration of Independence; it merged or "blurred/smudged together" the CONTENT and STYLE of the story and the Declaration. It's not good at understanding the question or the context... and therefore, a lot of its answers feel "blurry".

"Wonder why"? Because, human thoughts, opinions and language are inherently blurry, right? That's my view. Plus, humans have a whole nervous system which has a lot of self-correcting systems (e.g. hormones) that ML AI doesn't yet account for if its goal is human-level intelligence.


> How, technically, is it anything like that?

Huh? It isn't. It's a good description because it's figuratively accurate to what reading LLM text feels like, not because it's technically accurate to what it's doing.


Brilliantly put, thanks for this.


Why is this so hard to read?


It has idiosyncratic word choice and punctuation, as well as references to other comments out of context.


it's the most voted comment in the thread. might be a you thing.


Yeah maybe you’re right



