
I empathize with the obsession (we all have some obsessive behaviors we’re not thrilled with) but I question the utility.

It feels like some kind of negative appeal to authority: if the words were touched by an AI, they are less credible, and therefore it pays to detect AI as part of a heuristic to determine quality.

But… what if the writer just isn’t a native speaker of your language? Or is a math genius but weak with language? Or…

IMO human content is so variable in quality that it is incumbent on readers to evaluate based on content, not provenance. Using an author’s tools, or ethnicity, or sociowhatever as a proxy for quality doesn’t seem healthy or productive at all.




I would rather see the errors a non-native speaker makes than wade through grammatically correct but generic, meaningless generated business speak in an attempt to extract meaning. When you sound like everyone else, you sound like you have nothing new to say: a linguistic Soviet Union, bland, dull, depressing.

I think there's a bigger point about coming across as linguistically lazy: copying and pasting text without critiquing it, akin to copying and pasting a Stack Overflow answer, which gives rise to possibly unfair intellectual assumptions.


Your comment reminded me of an account I saw in a niche Reddit sub for an e-reader brand that posted gigantic 8-paragraph "reviews" or "feedback for the manufacturer" with bullet points and a summary paragraph of all the previous feedback at the end.

They always had a few useful observations, but it required wading through an entire monitor's worth of filler garbage, which completely devalued the time/benefit of reading something with such low information density.

It was sad because they were clearly very knowledgeable, but their insight was ruined by prompting ChatGPT with something like "Write a detailed, well-formatted formal letter addressed to Manufacturer X", which was completely unnecessary in a public forum.


I feel the need to paraphrase the Ikea scene in Fight Club: "sentences with tiny errors and imperfections, proof that they were made by the honest, simple, hardworking people of... wherever"


Non-native speakers may not want to make errors. I want to post grammatically correct comments. This is even more true for texts that have my real name attached. It's not just about the receiver.


My boundaries are absolutely only about me. Using spell check is one thing, but if you outright can't write without using an LLM prompt then no, I don't want to read it thinking a person wrote it. If that doesn't catch on, I'd sooner move to a whitelist approach or stop reading altogether than be forced to read it.


You seriously don’t want to read content that was hand-written in another language and translated by LLM? That seems extremely parochial.


I am seeing this on the OpenStreetMap forums, which are an international affair, and it really annoys me. We get well-meaning mappers who join a thread in a language not their own (e.g., when something is being discussed within a national community) using LLM-translated posts.

For Dutch, this is extremely annoying¹. It's not that you can't translate to and from Dutch; it's that the model will not pick up the nuances in text written by people with decent proficiency in Dutch (like the way written and spoken Dutch can be really rather direct, which can translate to quite impolite English and really improper German), and technical and domain-specific content (e.g., traffic regulations) gets butchered.

I would much rather see someone responding to a Dutch thread do so in English if they can't write Dutch, because then at least I can see whether the translation from Dutch is going wrong somewhere, instead of having to figure out why that person isn't making sense by going through two passes of an LLM… Been there, done that. Besides, if I'm replying I can do so in English too, and avoid having LLMs mangle my words.

So yes, I too abhor having to deal with any form of communication where an LLM sits between the other person and myself. I find it exceedingly rude.

1: For other languages too, but as a native Dutch speaker this one is easy for me to see.


I said "LLM prompt" because I meant just that, a prompt. I described my stance on it more thoroughly here: https://news.ycombinator.com/item?id=42829068


I absolutely do not want to read that. I want google to stop sending me that. Either it’s written in French or English and I can read it directly, or it’s written in another language and I can ask for automatic translation myself, but do not lie to me about who wrote it and in what language.

I’m so tired of translation slop. I live in France, and when I search for building-related stuff in French I have to wade through pages of translation slop to find things written with the actual building standards and codes in mind. Avoiding sales-pitch, AI, and translation slop is getting really tiring when you’re looking for contextualized expert knowledge.


I don't want to read anything generated by an LLM, now or ever.


I am trilingual. Sometimes Google auto-translates their docs into the local language, despite my browser and account language being set to English. I hate this. Monolingual people may not fully grasp how much languages differ in the exact details of how you write; a translation will always alter the text, and when it is done without a human mostly rewriting the entire thing by hand, it becomes more confusing, meandering, and unpleasant.


This nicely sums up my distaste for the recent Lex / Zelenskyy interview. I feel like the auto-translation was a mistake, and I would have preferred anything else.


If non-native speakers (including myself, fwiw) want to post grammatically correct comments, there's a fairly straightforward solution: learn grammar and use a spell/grammar checker. Have the courage to write your own words and the decency to spare the rest of us from slop.


People who depend on LLMs to polish their words will run into the same problem as people who rely on autocomplete functionality: their language skills will suffer.

There's nothing wrong with using tools to check written text, but I'm wary of blindly accepting any suggested fixes. If I see a red underline I'll consider whether the word is actually fine first (English is not a static language, and spelling dictionaries are not complete), and if it looks wrong I'll try fixing it myself before reaching for the suggested fix.


Then either you edit the results as suggested in TFA or those comments are in fact not yours. Grammatically correct or otherwise.


Comments that have been translated are yours or at least widely regarded as such. An aversion to AI isn't going to change that.


In the US at least, translators own the copyright of their translation. That is to recognize the complexity of translating meaning and not just words from one language to another.


Sure, but ask almost anyone who wrote a given translated work of fiction (or whatever), and they'll mention the author; the translator often doesn't come into the picture at all. Ultimately, most people don't really care about translators, complex job or not.


Definitely. I'm not saying it's solely the work of the interpreter (clearly not), but it is a significant intellectual contribution. I do not think this contribution has remotely been made obsolete by artificial translation.


I tentatively agree - if the core idea buried within the text is unique enough then I'm not sure I care how much the text has been laundered. But that's a big IF.


Not quality. Accountability.

I work in (okay, adjacent to) finance. Any communications that are sent / made available to people outside your own organisation are subject to being interpreted as legally binding to various degrees. Provenance of any piece of text/diagram is vitally important.

Let's pair this with a real life example: Google's Gemini sales team haven't understood the above. Their splashy sales pitch for using Gemini as part of someone's workflow is that it can autogenerate document sections and slide decks. The idea of annotating sections based on whether they were written by a human or an unaccountable tool appeared entirely foreign to them.

(The irony is that Google would be particularly well placed to have such annotations. Considering the underlying data structures are CRDTs, and they already show who made any given edit, including an annotation of whether a piece of content came from a human or a bot should be relatively easy.)
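A minimal sketch of what such an annotation could look like, purely illustrative Python that assumes nothing about Google's actual CRDT internals: each edit operation simply carries an origin flag next to the author it already records.

    from dataclasses import dataclass
    from enum import Enum

    class Origin(Enum):
        HUMAN = "human"          # typed by the signed-in user
        ASSISTANT = "assistant"  # produced by an autogeneration feature

    @dataclass(frozen=True)
    class InsertOp:
        # One insert operation in a hypothetical CRDT-style edit log.
        position: int   # logical position in the document
        text: str       # content being inserted
        author: str     # who performed the edit (already tracked today)
        origin: Origin  # the extra bit: human keystrokes or tool output

    def provenance_report(ops):
        # Count how many inserted characters came from each origin.
        counts = {origin: 0 for origin in Origin}
        for op in ops:
            counts[op.origin] += len(op.text)
        return counts

    ops = [
        InsertOp(0, "Quarterly risk summary: ", "alice", Origin.HUMAN),
        InsertOp(24, "Exposure remained within mandate ...", "alice", Origin.ASSISTANT),
    ]
    print(provenance_report(ops))  # character counts per origin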


I don't understand this argument. There is accountability: it is always possible to blame the user or management.

Say one of my tasks is writing a document; I use an LLM and it tells people to eat rat poison.

But I'm accountable to my boss. My boss doesn't care an LLM did it; my boss cares I submitted something that horrible as completed work.

And if my boss lets that through then my boss is accountable to their boss.

And if my company posts that on the website, then my company is accountable to the world.

Annotations would be useful, sure. But I don't think for one minute they'd release you from any liability. Maybe they don't make it into the final PDF. Or maybe not everyone understands what they're supposed to take away from them. You post it, you'll be held responsible.


Hm, we may be using the word in slightly different senses then. For me, accountability is more than just apportioning blame; it's also about how you arrived at the result you put out.

On the other hand, I absolutely agree with this:

> And if my company posts that on the website, then my company is accountable to the world.

I take pride in having my name associated with material we post publicly. It doesn't make my employer any less involved in it, but it does mean we both put our necks out: the company figuratively, and me personally.


<< my boss cares I submitted something that horrible as completed work.

Bosses come in many shapes and sizes. That said, some of the bosses I've had usually wanted it all (as in: LLM speed, human insights, an easy-to-read format, but also a good and complete artifact for auditors). And they tended to demand it all (think Musk) as a way of managing, because they thought it helps people work at their highest potential.

In those instances, something has got to give.


Ideally, yes; sadly, examples abound with excuses like "the machine did it" or "the machine doesn't seem to allow me to do what you are asking for, due to either my own incompetence, that of the engineers who fabricated it, or my organization's policy, so I'm going to pretend it's impossible (even though it would be possible to do it by hand)".


One issue is that AI skews the costs paid by the parties of the communication. If someone wrote something and then I read it, the effort I took to read and comprehend it is probably lower than the author had to exert to create it.

On the other hand, with AI slop, the cost to read and evaluate is greater than the cost to create, meaning that my attention can be easily DoSed by bad actors.



That would be the best-case outcome for some, and even that is a horribly bad outcome. But the vast majority of people would get DDoSed, scammed, misled by politicians and political actors, etc. The erosion of trust from humans simply being intellectually dishonest and tribal is already bearing really dark fruit; covering the globe in LLM slop on top of that will predictably make it much worse.


To be perfectly honest, that erosion of trust is already here.


Not that erosion of trust, an erosion of trust. Big difference.

But yes, an erosion of trust was already there, just as there was never perfect trust, and just as even in the worst hellscape humans can physically maintain, "there will always be some trust left, somewhere". All of that is true, but it also doesn't say much.

Erosion of trust is also not something that just happens or "is here now"; it's a description of a living process, after all, between humans and groups of them, and you can reverse it with honesty. Erosion and regrowing of trust happen all the time, you might say. It takes time, kind of like how reversing erosion and planting things takes longer than eroding and cutting them down, but so what.


The bizarre part is the first panel in the comic! I'm not sure where people get the idea that they need to fluff up their emails or publications. It exists, sure, I'm just saying I've never felt the need to do it, nor have I ever (consciously, of course) valued a piece of text more because it was more fluffy and verbose. I do have a bad habit of writing over-verbosely myself (I'm doing it now!), but it's a flaw I indulge in on my own keyboard. I use LLMs plenty often, but I've never felt the need to ask one to fluff up and sloppify my writing for me.

But I really want to know where the idea that fluffier text = better (or more professional?) comes from. We have plenty of examples of how actual high-up business people communicate: it's generally quick and concise, not paragraphs of prose.

Even from marketing/salespeople, I generally value the efficient and concise emails way more than the ones full of paragraphs. Maybe this is an effect of the LLM era, but I feel like it was true before it, too.


This is partly what led me to leave a job. Coworkers would send me their AI slop expecting me to review it. Management didn’t care, as it checked the box. The deluge of information, and the ease of creating it, is what’s made me far more sympathetic to regulation.


Which is exactly the same problem as with spam.


Oddly enough, the thing is that LLM-generated text is far less likely to sound like a non-native speaker's writing. Once you sort of understand the differences in grammar rules, or just from experience, certain types of non-native English always have a feel to them which reflects the mismatch between the two languages; e.g., rough Chinese-English translations tend to retain the Chinese grammar structure and also mix up formalisms of words.

LLM text just plain doesn't do this: the models are very good at writing perfectly formed English, but it just winds up saying nothing (and models like ChatGPT have been optimized so they end up having a particular voice they speak in as well).


> certain types of non-native english always have a feel to them which reflects the mismatch between two languages

This. My partner always speaks Frenglish (French English) after talking to her parents. You have to know a little French to understand her sentences. They’re all English words, but the phraseology is all French.

I do the same with Slovenian. The words are all English, but the shape is Slovenian. It adds a lot of soul to your words.

It can also be topic dependent. When I describe memories from home in English, the language sounds more Slovenian. Likewise when I talk about American stuff to my parents, my Slovenian sounds more English.

ChatGPT would lose all that color.

Read Man In The High Castle to see this for yourself. Whole book is English but you can tell the different nationalities of each character because the shape of their English changes. Philip K Dick used this masterfully.


> Whole book is English

Amusingly, I think this phrase illustrates your point. To the best of my knowledge, a native speaker (which I'm not) would always say "The whole book is (in?) English"; leaving off articles seems to be very common for Slavic people (since I believe you don't really have them in your languages).


> leaving off articles seems to be very common for Slavic people

Whenever I come across text with a lot of missing articles, the voice inside my head automatically switches to a Russian accent; and in the instances where I've bothered to find out who the author was, it was always someone from Russia or some other ex-USSR country, so it seems this association is already ingrained at a subconscious level.


Poles, Czechs etc. also do this and IMHO, their accent sounds quite different from the Russian one.


I think this is more about formality and modern usage. I'm nearly 50 and am British. I sometimes write in this abbreviated form, omitting things like articles when they are unnecessary. Especially in text messages, social media posts, etc.


I used to work in academia with a Chilean guy who added extra articles where they weren’t needed and a Slovakian guy who didn’t put any in at all. I had fun editing the papers we wrote!


Spanish has definite and indefinite articles like English, so at least the concept is not unknown. However, even then, the correct usage is sometimes really arbitrary and varies across languages, e.g. why is it typically "mankind" and not "the mankind" (by contrast, in German it's "die Menschheit", with an article)?


It also helps refute the point, because you could certainly ask an LLM to speak as though it were a character from the book.

And if what it does now is unimpressive, it might be a good thing to use to monitor the rapid progress of LLMs.


Just to corroborate as a native English speaker, yes, in my experience the "the" would only be left off in quite informal registers or in haste.


There is sure to be lots of training data from people with French as a first language and English as a second language that can be pulled up with some prompting.


LLMs certainly do write perfectly grammatical and idiomatic English (I haven't tried enough other languages to know whether this is true for, say, Japanese too). But regular people all have their own idiosyncratic styles: words and turns of phrase they like using more than others, preferred sentence structures and lengths, different levels of politeness, deference and assertiveness, etc.

LLM output to me usually sounds very sanitised style-wise (not just content-wise), some sort of lowest-common-denominator language, which is probably why it sounds so corporate-y. I guess you can influence the style by clever prompt engineering, but I doubt you'd get a very unique style this way.


I have successfully gotten ChatGPT to copy a Norwegian artificial sociolect spoken by at most a few hundred people, one it wouldn't admit to even knowing (the circle using it includes a few published authors and journalists, so there's likely some content in its training data, but not much), by describing its features, so I think you might be surprised if you try. Maintaining it through a longer conversation might prove a nuisance, though.


In case it ever needed to be said: yes, they do generate idiomatic language, but in Japanese it sounds like translated corporatese. Considering that there are no viable LLMs trained purely on $LANG for any LANG other than `en_US`, I suspect there's something (corporate-)English-specific in the LLM architecture that only a few people in the world understand.


You can definitely attempt to impose "in the style of X" - or if you have original samples you can try to provide them as stylistic sample data.

But realistically, how many people are going to actually do that? Communication fed through an LLM represents a rather bleak linguistic convergence.


> some sort of lowest-common-denominator language

LLMs output the statistically most average sequence of tokens (there's no intelligence there, "artificial" or otherwise), so yeah, that's by design.


It can emulate a bad English speaker if prompted to do that.


Yes, there's enough explicitly tagged bad English in the training dataset to make a valid average approximation.


No. Not explicitly tagged. They are initially trained on vast amounts of data which are not tagged.

You fundamentally misunderstand how this works.

The LLMs learn the various grammars and "accents" implicitly. They automatically differentiate these grammars.

Sounds like you still have this idea that LLMs are a giant Markov chain. They are not Markov chains.

They are deep neural networks with hundreds of layers and they automatically model relations at extremely deep levels of abstraction.


The context is the explicit tagging in this case. You don't need to understand language to detect English-as-a-second-language speakers. (Indeed, Markov chains will happily solve this problem for you.)
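As a toy illustration of that parenthetical, and nothing more (the corpora below are made up and far too small to mean anything; a real detector would need actual data), one could train a word-bigram Markov model on each kind of text and compare likelihoods:

    from collections import Counter
    import math

    def train_bigram(sentences):
        # Count unigram and bigram frequencies over whitespace-tokenized sentences.
        bigrams, unigrams = Counter(), Counter()
        for s in sentences:
            tokens = ["<s>"] + s.lower().split()
            unigrams.update(tokens)
            bigrams.update(zip(tokens, tokens[1:]))
        return bigrams, unigrams

    def log_prob(sentence, model, vocab_size):
        # Add-one smoothed log-probability of the sentence under a bigram model.
        bigrams, unigrams = model
        tokens = ["<s>"] + sentence.lower().split()
        return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size))
                   for a, b in zip(tokens, tokens[1:]))

    native = train_bigram(["the whole book is in english",
                           "the translation loses all the colour"])
    learner = train_bigram(["whole book is english",
                            "translation loses all colour"])
    vocab = len(set(native[1]) | set(learner[1]))

    test = "whole book is english"
    verdict = ("learner-like" if log_prob(test, learner, vocab) > log_prob(test, native, vocab)
               else "native-like")
    print(test, "->", verdict)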

> they automatically model relations

No, they do not model anything at all. If you follow the tech bubble turtles all the way down you find a maximum likelihood logistic approximation.

I know, I know; then you'll do a sleight of hand and claim that all intelligence and modeling is also just maximum likelihood, even though it's patently and obviously untrue.


It's literally a model.

Large Language Model (LLM).

Hundreds of layers with a trillion weights and you think "nothing is modelled" there. The comments on this site are ridiculous.

Studies have traced individual "neurons" in LLMs that represent specific concepts. It's not even debatable at this point.


> Chinese-English rough translations tend to retain the Chinese grammar structure

Those would be _really_ rough translations. Yes, I've seen "It's an achieve my dream's place" written, but that was in an essay written for high school.


LLMs do whatever you ask them to. They have a default, but they can be directed to use a different response style.

And of course you could build a corpus of text written by Chinese English speakers for more authenticity.


> But… what if the writer just isn’t a native speaker of your language? Or is a math genius but weak with language? Or…

All of these could apply to those YouTube videos that have synthesized speech, but I'll bet most of us click away immediately when we find the video we opened is one of those.


Agreed. Same reason I don't envision TTS podcasts taking off any time soon - the lack of authenticity is a real turn off.


No, we clearly don't. They remain very popular.


> what if the writer just isn’t a native speaker of your language [...] evaluate based on content

Evaluate as in "monetize" everything, and that's how we ended up with this commercialized internet. The old web was about diversity and meeting new people all over the world. I don't care about grammar mistakes; they make us human.


I find grammatical mistakes in non-native speakers endearing. Either when they speak English and are non-native speakers of English (I am too), or when they speak my native language and they are not native speakers of mine.

Especially when it’s apparent that it comes from how you would phrase something in the original language of the person speaking/writing.

Or as one might say: Especially when it is visible that it comes of how one would say something on mother’s language to the person that speaks or writes.


I think the author does cover their bases there:

> To be clear, I fault no one for augmenting their writing with LLMs. I do it. A lot now. It’s a great breaker of writers block. But I really do judge those who copy/paste directly from an LLM into a human-space text arena.

When writing in my second language, I am leaning very heavily on AI to generate plausible writing based on an outline, after which I extensively tweak things (often by adversarial discussion with ChatGPT). It scares me that someone will see it as AI slop though, especially if the original premise of my writing was flimsy...


I hope the article didn't make you feel bad and discourage you from writing. IMO what you are doing is not slop, and the author's "I really do judge those who copy/paste directly from an LLM into a human-space text arena" is a pretty shallow judgement if taken at face value, so I'm hoping it was just some clumsy wording on their part.

---

When the AI hype started and companies started shoving it down everyone's throats, I also developed this intense, reflexively negative reaction to seeing LLM text, much like the author describes in the first paragraph. So many crappy start-ups and grifters, which I think I saw a lot of because I frequented the /r/localllama subreddit and generally followed LLM-related news, so I got exposed to the crap.

Even today I still get that negative reaction from seeing obvious LLM text, but it's a much weaker reaction now than it used to be, and I'm hoping it'll go away entirely soon.

The reason I want to change: my attitude shifted when I heard about a lot more use cases like the one you describe, people who really could use the help of an LLM. Maybe you aren't good with the language. Maybe you are insecure about your own ability to write. Maybe you aren't creative or articulate and you want to communicate your message better. Maybe you have 8 children and your life is chaos, but you need to write something regularly and ChatGPT cuts that time down a lot. Maybe your fingers physically hurt, or you have a disability and can't type well. Maybe you have a mental or neurological problem and can't focus or remember things, or you have dyslexia or whatever. Maybe you are used to Google searching, now think Google results are kinda shit these days, and a modern LLM is usually correct enough that it's just more practical to use. Probably way more examples I can't think of.

None of these uses are "slop" to me, but they can result in text that looks like slop to people, because it might have an easily recognizable ChatGPT-like tone. If you get judged for using AI as a helping tool (and you are not scamming/grifting/etc.), then judge them back for judging you ;)

Also, I'm not sure "slop" has an exactly agreed-upon definition. I think of it as low-effort AI garbage, basically a use of LLMs as misdirection. Basically the same as "spam", but maybe with the nuance that now it's LLM-powered. It makes you waste time. Or tries to scam or trick you. I don't have a coherent definition myself. The author has a definition near the top of the page that seems reasonable, but the rest of the article didn't feel like it actually followed the spirit of that definition (like the part about judging copy/paste).

To give the author the benefit of the doubt: I think they maybe wrote for an audience of proficient English-speaking writers with no impediments to writing, assuming everyone knows how to, or can, "fix" the LLM text with their own personal touch or whatever. Not sure. I can't read their mind.

I have a hope that genuine slop continues to be recognizable: even if I got a 10000x smarter LLM right now, ChatGPT-9000, could it really do much if I, as its user, continued to ask it to make crappy SEO pages or misleading Amazon product pages? The tone of the language might get more convincing, but savvy humans should still be able to read reviews, realize an SEO page has no substance, etc., regardless of how immaculate the writing itself is.

Tl;dr: keep writing, and keep making use of AI. I hope reading that sentence didn't actually affect you.


False positives aren’t a big problem. There’s more content than I have time to read and my tolerance for reading anything generated is zero. So it’s better to label too much human content as generated and risk ignoring something insightful and human generated.


Depending on the subfield, that might not be true. It's also quite disheartening to find yourself in a social space where you realize that you are almost the only human left (this has happened to me twice already).


> False positives aren’t a big problem.

You will think that until something you wrote with your own mind and hands is falsely accused of being AI-generated.

“Sorry alkonaut, your account has been suspended due to suspicious activity.”

“We have chatgpt too alkonaut! No need to copy paste it for us”

“It is my sad duty to inform you that we have reason to believe that you have committed academic misconduct. As such, we have suspended your maintenance grant, and you will be removed from the university register.”


False positives must be zero in that context. Not when I choose which blog posts to spend 30 minutes on. Quite different.


Someone is writing the blog post. That is the person who cares if you mistake their hard work for AI slop.


Exactly. And in that context I really don’t care. Similarly if I wrote the blog post: people can leave after the first paragraph because it’s uninteresting. If they leave because it looks generated, it’s the same thing. Writing in an interesting way is a skill. Writing in a “human” way is probably quickly becoming a skill now too. But I think they were probably always closely related.


There are people who write to help illuminate others.

There are also people who use bots to write to raise the communication noise floor so no comprehension can occur.

It's all about cost and intent.


Content written by a non-native English speaker will (usually) have some errors. Content generated by ChatGPT-4 will have no errors, but it will give the feeling that the person writing was compelled to puke out more and more words.


I wrote (dictated, mostly, but still my own words) a comment that I will eventually post on Hacker News[0], then ran it through ChatGPT with a prompt not much more complicated than "rewrite this in the style of a great hacker news comment".

The result hurt. Not because it was bad, but because it was better than I could do myself or even hope to do myself eventually.

I am sure the comment would be upvoted more after it had been run through the AI than before.

[0]: it addresses a common misconception that shows up often, but each time I see it I don't have the time to write a proper reply. I am not trying to astroturf HN.


The assumption that AI is going to perfectly fill the gaps in the language abilities of anyone with a good idea but poor communication tools to explain it feels naive. Among other issues, the more original and groundbreaking an idea is, the harder it will be for the machine to follow it, as it may deviate too much from its training dataset.


I'm not a native speaker.

I've been accused of being AI often because of that. :(


You are right. There is very little utility.

These people are not domain experts; they often latch onto structure or happenstance that is quite common (in the overall picture), and consider anything out of the ordinary to be AI slop. It's a false-justification loop, which breaks their perception.

The first half of the last century (1900s-1940s) was a time when hyper-rationalism played an important role in winning WW2. Language use in published works and in academia at that time relied on words with distinct meanings; they were sometimes uncommon words, but they allowed a rigorous approach to communication.

Today we have words which can have contradictory meanings in different contexts, rooted in ambiguity, where the same word means two things simultaneously without further information. AI often can't handle figuring out the context in these cases and often hallucinates, whereas the context can in some cases be clear to a discerning human reader.

I have seen it more than a few times: people misidentify these clear-cut cases of human consistency as AI-generated slop. There is a lot of bias in perception that makes this a common issue.

In my opinion, the exercise the article's author suggests is simply fallacy, following a deluded spiral into madness.

Communication is the sharing of a consistent meaning. Consistency plays a big role in that.

People can talk about word counts, frequency, word choices, etc., and in most cases it's fallacy, especially when there is consistency in the meaning. They delude themselves, fueling a rather trite delusion that anything that looks different is in fact AI and not a real person.

It is sad that people can be so easily fooled, and false justification is one of the worst forms of self-violation since it warps your perception at a fairly low level.


> Using an author’s tools, or ethnicity, or sociowhatever as a proxy for quality

For me, the rejection of it doesn't even depend on there being any author involved with it; it could just be running free, so to speak.

And language is very close to the ability to think and even to see the world around us. To just poison that well willy-nilly because "it's hard" is not a great argument. It's hard because it matters, and that's why learning a language and improving one's use of it is rewarding.

Personally, I view machine translation that happens in the process of communication, as part of an ongoing exchange between people or in a group (mathematicians) that involves feedback and clarification etc., as very different from using an LLM to create static "content". We had been using DeepL and Google Translate long before any of this hype, and it was fine.

You asked, what if the writer isn't a native speaker of my language, but how would they even know my language? They only do in personal communication, in which case see above; otherwise, I don't want to read it. That is, people should write in languages they know, because those are the only ones they can proofread. That's the only way they can make sure it's actually what they think it is. And others who are good at translating (be it software or a person) can translate it when needed. There is no need to destroy the original words and keep only the translation; at least, I have no need for that.

> If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.

-- George Orwell

Yes, this is correlated with privilege. Life is still not fair. Which we fix, or at least improve, by making a fairer world where everybody has access to education and medicine, not by pretending you can fake the process by producing something that statistically could have been an outcome of the process, had it taken place.

The poorest and most vulnerable people will suffer the most in a world where money and bandwidth alone can buy you what people think, what they see, what drowns out any human voice trying to reach other humans. This is what billionaires clamor for, not the average person, at all.


I have some anonymous accounts, and use AI to avoid being identified.



