I was only pretty sure that the comment was AI generated as I read it. After reading it, I became a lot more confident when I noticed the username above the comment: Gemini 3.
Is this a Wordpress plugin the blog author is using?
Amazing that we now live in a world where AI can instantly and accurately diagnose a network infrastructure problem, but you are still forced to talk to CS drones who tell you again and again "have you tried unplugging it and plugging it back in?"
Going off on a tangent now, but man, I wish when you called support you could go through a quick technical competency test, and your results dictated who you got support from.
Nailing dense questions about network infrastructure? You get to the engineering team.
Failing to know what the "G" in 2.4GHz means? You probably just need someone to tell you to restart your router.
Our city fiber support is awesome and I've had luck in the past with telling the frontline tech that I likely needed to talk to one of the senior techs, and getting immediately escalated. I don't remember the exact problem I was having, but it was something slightly tricky that I had done extensive investigation on before calling.
I can't speak to the accuracy of the diagnosis, but the claims about NTP are bizarre, and to the best of my knowledge, wrong. There's nothing specific about the times the incidents cluster around that would have anything to do with NTP. It doesn't work like that.
>The precision of outages (at :29 and :44) matches a network-synchronized clock (NTP).
I think this just correctly points out that if the trigger was something unsynchronized like animals chewing on wires or someone digging underground, you wouldn't have 61% of events occurring at these two second markers. Even if the trigger was something digital but on a machine that isn't NTP synchronized, you would eventually have enough clock drift to move the events to other seconds. 61% combined at two markers (exactly 15 seconds apart) strongly suggests synchronized time.
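To put a rough number on that intuition, here is a back-of-the-envelope Python sketch. The total event count is invented purely for illustration; only the 61% figure and the two second markers come from the post:

    import math

    # Assumed for illustration: suppose 100 outage events were logged.
    # Only the 61% figure and the :29/:44 markers come from the post.
    n_events = 100
    observed_fraction = 0.61
    p_uniform = 2 / 60   # chance a uniformly random trigger lands on either marker

    k = round(n_events * observed_fraction)

    # Binomial tail: probability of at least k hits on those two seconds
    # if the trigger times were spread uniformly across the minute.
    tail = sum(
        math.comb(n_events, i) * p_uniform**i * (1 - p_uniform)**(n_events - i)
        for i in range(k, n_events + 1)
    )

    print(f"expected fraction if uniform: {p_uniform:.1%}")   # ~3.3%
    print(f"P(>= {k}/{n_events} events at :29/:44 by chance): {tail:.2e}")

Even with these made-up numbers the tail probability is vanishingly small, which is the whole argument: uniform, unsynchronized triggers don't cluster like that.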
That’s more about enhancing shareholder value than anything else. The MBAs need to cut those costs and keep them down in order to get their promotions and bonuses. The CEO needs another yacht, by the way.
Sometimes, the truth can be a letdown. Everyone was hoping it was LLMs all the way down. ;)
I am very fortunate to have two competing ISPs in my area. Verizon Fios, and Optimum Fibre. I have played them against each other. I have had both, over the years. I am currently using Optimum.
Still not especially cheap, but the service is good. The customer service ... not so good (think South Park).
Certain patterns are much more common in LLM output than in human writing. I'm a journalist and love an em dash, for example, but I've never met/read another journalist that uses them nearly as often as LLMs. Same with the "this isn't just X, it's Y" pattern. When you have multiple of these patterns in every paragraph, it's a pretty clear indicator that the text is AI-generated.
Plus, the author admitted to using AI to write it.
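The "multiple patterns in every paragraph" heuristic is easy to make concrete. Here is a rough sketch; the regexes are hand-written approximations of the two tells mentioned above, not any standard detector:

    import re

    # Hand-written approximations of two stylistic tells: em dashes and
    # the "this isn't just X, it's Y" construction. Not a real detector.
    PATTERNS = {
        "em dash": re.compile("\u2014"),
        "not just X, it's Y": re.compile(r"(?:isn't|is not|not)\s+just\b.{0,80}?\bit'?s\b", re.IGNORECASE),
    }

    def hits_per_paragraph(text):
        paragraphs = [p for p in text.split("\n\n") if p.strip()]
        for number, para in enumerate(paragraphs, start=1):
            yield number, {name: len(rx.findall(para)) for name, rx in PATTERNS.items()}

    sample = "This isn't just a bug, it's a feature\u2014or so the model keeps insisting."
    for number, counts in hits_per_paragraph(sample):
        print(number, counts)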
One of the little tics I've noticed that helps weed out LLM generated text is to CTRL+F for the word "therefore". LLMs will use the word in a new sentence that isn't the conclusion of any previous sentences or paragraphs. Think like, "Bees are small, fuzzy, and yellow. Therefore their ability to fly is an astounding achievement." In all my years of reading I've never seen people use the word that much in common writing, and when they do it's usually as part of a compound sentence. These things really do have their own little set of semantics and dialect that they follow; it seems like a unique quirk.
I'll second that, this is extremely annoying and exhausting.
It feels like the slightest occurrence of a less-than-ubiquitous pattern or any word not regularly used by the majority of the population instantly spawns a slew of newfound linguists who'll pitch in to explain how this certain marker ought to be proof of AI origin.
This does nothing for the conversation, except helping the claim that AI will erode and dumb down our language become a self-fulfilling prophecy when people start feeling pressured to use the most dumbed-down, simplistic, and rhetorically bland way of expressing themselves to avoid any "suspicion".
Not necessarily: the LLMs used today are far from just simple models of written information on the internet. They use in-house data they wrote themselves, plus RLHF/DPO, where the model is effectively trained on its own data to optimize for human preference. If sampled with a high enough temperature, this could theoretically bring out entirely new, unseen forms of speech, as long as people express their preference for it via the user interface.
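For context, "sampling with high enough temperature" just means flattening the next-token distribution before picking. A minimal sketch with made-up logits rather than a real model:

    import numpy as np

    def sample_with_temperature(logits, temperature, rng):
        # Dividing logits by T > 1 flattens the distribution, so rarer,
        # less "average" tokens get picked more often.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    rng = np.random.default_rng(0)
    toy_logits = [4.0, 2.0, 0.5, 0.1]   # made-up scores for four candidate tokens

    for T in (0.5, 1.0, 2.0):
        picks = [sample_with_temperature(toy_logits, T, rng) for _ in range(1000)]
        freq = np.bincount(picks, minlength=len(toy_logits)) / 1000
        print(f"T={T}: token frequencies {np.round(freq, 2)}")

At T=0.5 the top-scoring token dominates; at T=2.0 the low-scoring tokens are sampled far more often, which is the mechanism being gestured at.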
I don't think anyone's here to debate the origin of the speech patterns these things are using. It feels clear to me, at least, that the guy you're replying to is uninterested in reading stuff generated by AI; I can't say I disagree with him.
No, that’s in literally every LLM generated response to a forum message I’ve seen. It’s so common as to have become a trope. That’s not confidence. It’s a clear indicator of AI.
I’m very much uninterested in reading AI generated content. Your assertion seems to be “AIs only write like that because people have been writing like that”, but that’s not a great argument.
It feels like AI has suddenly given a platform for people who previously were unable to properly write blog content. But it immediately feels unoriginal and generic.
I’m just not interested in that type of content and immediately put off by it.
The only reason I mentioned this is because of the comment about Gemini 3 being in the comments.
I’m just really, really tired of all the AI content everywhere nowadays and crave some authenticity.
It just feels like cheap remakes / imitations of original content.
For the most part LLMs choose "the most common" tokens; so regardless of whether the content was "AI content" or not, maybe you are getting tired of mediocrity.
And of course, that mediocrity has become so cheap that it is now the overwhelming majority.
This is similar to how the average number of children per household is 2.5, but no one has 2.5 children. The most common tokens can yield patterns that no one actually uses together in practice.
LLMs have the tendency to really like comparisons / contrasts between things, which is likely due to the nature of neural networks (e.g. “Paris - France + Italy” = “Rome”). This is because when representing these concepts as embeddings, they can be computed very straightforwardly in vector space (a toy sketch follows below).
So no, it’s not all due to human language; LLMs really do write content in a specific style.
One recent study also showed something interesting: AIs aren’t very good at recognizing AI generated content either, which is likely related; they’re unaware of these patterns.
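Here is the toy sketch promised above for the “Paris - France + Italy ≈ Rome” idea, with made-up 3-d vectors; real models learn embeddings with hundreds of dimensions and the arithmetic only holds approximately:

    import numpy as np

    # Toy 3-d "embeddings" invented for illustration; real models learn
    # vectors with hundreds of dimensions.
    vectors = {
        "Paris":  np.array([0.9, 0.8, 0.1]),
        "France": np.array([0.1, 0.8, 0.1]),
        "Italy":  np.array([0.1, 0.1, 0.9]),
        "Rome":   np.array([0.9, 0.1, 0.9]),
        "Berlin": np.array([0.9, 0.5, 0.4]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    query = vectors["Paris"] - vectors["France"] + vectors["Italy"]

    # Nearest neighbour to the composed vector, excluding the inputs.
    best = max(
        (w for w in vectors if w not in ("Paris", "France", "Italy")),
        key=lambda w: cosine(query, vectors[w]),
    )
    print(best)   # -> Rome with these toy vectors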
So what if it was present before? It was 1 in a million; now it's 999,999 in a million. It is perfectly valid to get really tired of it; in fact, this is exactly what "getting really tired of" means and has always meant.
I'm not a data analyst. Almost everything pertaining to data analysis of the log came from Perplexity Labs.
I'm also not a journalist and the article I wrote didn't sound professional and was too long. So I had AI change it to have a professional tone and structure and then edited it.
I'm also not an artist and I had AI generate a picture of a bear reading a newspaper. Then I used krita to remove the background and make it transparent.
I also asked the AI to generate 10 headlines, it gave me this one:
How a monopoly ISP weaponizes support incompetence against technical customers
Calls out systemic issue, appeals to HN's anti-monopoly sentiment
Then I changed it to:
How a monopoly ISP refuses to fix upstream infrastructure
Yes, I leveraged expertise from three fields outside of my skillset to simplify a task, bounce ideas around, and end up with a superior end result. It was demonstrably effective, and it would have been stupid to spend 4x the effort to receive zero traction.
If you are not a journalist, and AI is your editor, then you should remove the statement on your site that calls itself a newspaper. Newspapers have journalists and editors.
Though you did your original message a disservice. Now we are left wondering how forthcoming, honest, and friendly you were with that support staff. I'd also try to cheap out if I had to deal with a rude and/or dishonest customer. I'm not saying you were, but it's hard for us to know if you throw things at us like "why should I care?" You need to understand that this causes certain reactions.
I personally don't mind at all that you used AI to make your writing more accessible. To the contrary, I think it's a very suitable use of the tool and I would do the same.
But don't you realize what impression you are conveying to the audience here by being so strongly defensive? To the point of lashing out at bystanders like me? That's exactly what makes people wonder how you interacted with the company that you are so strongly (and likely rightfully) criticising.
To answer your question, by "owning up" I meant admitting to using AI for the text after initially denying it. Again, no judgement on my end for having used it. Apologies if my choice of the term implies a judgement to you. That wasn't my intention.
That’s literally what using an AI to write content is. The fact that you’re not seeing that and are resorting to these kinds of ridiculous comments says enough.
Sorry, but the semantic constructs used in the article, the em-dashes, the use of ± signs that nobody ever uses, the typical bold formatting, the fact that your brother posts AI generated comments on your blog, it’s just too much.
I don’t blame you for it; writing a blog post like this without AI takes days of work. But at least own up to it, instead of now playing innocent and insisting that you didn’t lie.
No, you didn’t lie; when confronted, you just omitted that you had also used AI to write the copy.
> the em-dashes, the use of ± signs that nobody ever uses
It is a bit saddening that correct punctuation is now a sign of dishonesty. I use em dashes and keep seeing people say they're such a dead giveaway. Now it's also ±, which I also use. Are multiplication signs instead of "x" next? Or degree symbols? What can I still use if I want people to not think I'm too dumb to write my own text?
You mention more signs (semantics, choice of what to embolden), but they're not binary signs (present → it's generated); they're basically guesswork.
What route, dude? I made an article to bring attention to the issue, and the attention has been brought. Why do you want me to do things your way? I do things my way just fine. There's always someone to tell you that you should do this or that. Why should I do this or that? You do this and that, if that's what's important to you. And if you want me to do this and that, then convince me why I should.
You shouldn't lie about your work because that harms society (if you care about us) and society also will punish you (if you care about yourself). If you don't care about yourself or the people around you, then I have no idea why you should care.
I hate this aspect of HN: on other websites I can just block these types of sociopaths/trolls at their first message, but here I end up wasting my time and energy or I'll look bad.
Exactly. If the author says "parts of the article were written by AI", that's probably it; nobody will waste any more time on this, because this is what the author has decided to do.
But instead the author seems to hide the fact of using AI down in a discussion where some people find using AI distasteful. That doesn't help.
My internet has been broken for 17 months, but you're more upset about me using AI to make my sentences sound professional than about Comcast refusing to fix their infrastructure.