Open-source Zig book (zigbook.net)
500 points by rudedogg 11 hours ago | 193 comments




> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.

I'm not sure what they expect, but to me Zig looks very much like C with a modern standard library and slightly different syntax. That isn't groundbreaking, and it isn't a thought paradigm that would be novel to most systems engineers the way, for example, OCaml could be. Stuff like this alienates people who want a technical justification for the use of a language.


There is nothing new under the Sun. However, some languages manifest as good rewrites of older languages. Rust is that for C++. Zig is that for C.

Rust is the small, beautiful language hiding inside of Modern C++. Ownership isn't new. It's the core tenet of RAII. Rust just pulls it out of the backwards-compatible kitchen sink and builds it into the type system. Rust is worth learning just so that you can fully experience that lens of software development.

Zig is Modern C development encapsulated in a new language. Most importantly, it dodges Rust and C++'s biggest mistake: not passing allocators into containers and functions. Because of that mistake, all realtime development has to rewrite its entire standard library, as with the EASTL.

On top of the great standard library design, you get comptime, native build scripts, (err)defer, error sets, builtin simd, and tons of other small but important ideas. It's just a really good language that knows exactly what it is and who its audience is.


Much of the book's copy appears to have been written by AI (despite the foreword statement that none of it was), which explains the hokey overenthusiasm and exaggerations.

For those who actually want to learn languages which are "fundamentally changing how you think about software", I'd recommend the Lisp family and APL family.

I'd also throw Erlang/Elixir out there. And I really wished Elm wasn't such a trainwreck of a project...

The gratuitous accusations in this thread should be flagged.

This looks fantastic. Pedagogically it makes sense to me, and I love this approach of not just teaching a language, but a paradigm (in this case, low-level systems programming), in a single text.

Zig got me excited when I stumbled into it about a year ago, but life got busy and then the io changes came along and I thought about holding off until things settled down - it's still a very young language.

But reading the first couple of chapters has piqued my interest in a language and the people who are working with it in a way I've not run into since I encountered Ruby in ~2006 (before Rails hit v1.0), I just hope the quality stays this high all the way through.


So many comments about the AI generation part. Why does it matter? If it’s good and accurate and helpful why do you care? That’s like saying you used a calculator to calculate your equations so I can’t trust you.

I am just impressed by the quality and details and approach of it all.

Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years)


Because the site explicitly says:

> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.

If the site had said something like "We use AI to clean up our prose, but it was all audited thoroughly by a human afterwards", I wouldn't have an issue. Even better if they shared their prompts.


> Why does it matter?

Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation.


But all those "it's AI" posts are about the prose and "style", not the actual content. So even if (and that is a big if) the text was written with the help of AI (and there are many valid reasons to use it, e.g. if you're not a native speaker), that does not mean the content was written by AI and thus contains AI mistakes.

If it was so obviously written by AI then finding those mistakes should be easy?


The style is the easiest thing for people to catch; GP said that the technical issues can be more difficult to find, especially in longer texts, though there are times when they are indeed caught.

Passing even correct information through an LLM may or may not taint it; it may create sentences which at first glance look similar but carry a different, imprecise meaning, and specific wording can be crucial in some cases. So if the style is in question, the content is as well. And if you can write technically correct text in the first place, why would you put it through another step?


Humans get things wrong too.

Quality prose usually only becomes that after many reviews.


AI tools make different types of mistakes than humans, and that's a problem. We've spent eons creating systems to mitigate and correct human mistakes, which we don't have for the more subtle types of mistakes AI tends to make.

If AI is used “fire and forget”, sure - there’s a good chance of slop.

But if you carefully review and iterate the contributions of your writers - human or otherwise - you get a quality outcome.


Absolutely.

But why would you trust the author to have done that when they are lying in a very obvious way about not using AI?

Using AI is fine, it's a tool, it's not bad per se. But claiming very loud you didn't use that tool when it's obvious you did is very off-putting.


Fortunately, we can't just get rid of humans (right?) so we have to use them _somehow_

AI gets things wrong ("hallucinates") much more often than actual subject matter experts. This is disingenuous.

Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer. I think it's disingenuous to assume that just because someone used AI they didn't look at or review the output.

A serious one yes.

But why would a serious person claim that they wrote this without AI when it's obvious they used it?!

Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went into their work.


That’s fine. Write it out yourself and then ask an AI how it could be improved with a diff. Now you’ve given it double human review (once in creation then again reviewing the diff) and single AI review.

That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to having it reviewed by two people; part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal.

Because the first thing you see when you click the link is "Zero AI" pasted under the most obviously AI-generated copy I've ever seen. It's just an insult to our intelligence, obviously we're gonna call OP out on this. Why lie like that?

It's funny how everyone has gaslit themselves into doubting their own intuitions on the most blatant specimen where it's not just a mere whiff of the reek but an overpowering pungency assaulting the senses at every turn, forcing themselves to exclaim "the Emperor's fart smells wonderful!"

    “The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”

It matters because it irritates me to no end to have to review AI-generated content that no human verified first. I don't like being made to do work under the guise of being given free content.

An awful lot of commenters are convinced that it's AI-generated, despite explicit statements to the contrary. Maybe they're wrong, maybe they're right, but none of them currently have any proof stronger than vibes. It's like everyone has gaslit themselves into thinking that humans can't write well-structured neutral-tone docs any more.

> That’s like saying you used a calculator to calculate your equations so I can’t trust you.

A calculator exists solely for the realm of mathematics, where you can afford to more or less throw away the value of human input and overall craftsmanship.

That is not the case with something like this, which - while it leans in to engineering - is in effect viewed as a work of art by people who give a shit about the actual craft of writing software.


If you believed that you wouldn't explicitly say there was no AI generated content at all, you'd let it speak for itself.

>That’s like saying you used a calculator to calculate your equations so I can’t trust you.

No it isn't. My TI-83 is deterministic and will give me exactly what I ask for, and will always do so, and when someone uses it, they need to understand the math first; otherwise the calculator is useless.

These AI models on the other hand don't care about correctness, by design don't give you deterministic answers, and the person asking the question might as well be a monkey as far as their own understanding of the subject matter goes. These models are if anything an anti-calculator.

As Dijkstra points out in his fantastic essay on the idiocy of natural language "computation", what you are doing is exactly not computation but a kind of medieval incantation. Computers were designed to render impossible precisely the nonsense that LLMs produce. The biggest idiot on earth will still get a correct result from the calculator because unlike the LLM it is based on boolean logic, not verbal or pictorial garbage.

https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...


I value human work and I do NOT value work that has been done with heavy AI usage. Most AI things I've seen are slop; I instantly recognize AI songs, for example. I just don't want anything to do with it. The uniqueness of creative work is lost when using AI.

Insecurity, that's why.

I too have this feeling sometimes. It's a coping mechanism. I don't know why we have this but I guess we have to see past it and adapt to reality.


> [Learning Zig] is about fundamentally changing how you think about software.

Learning LISP, Fortran, APL, Perl, or really any language that is different from what you’re used to, will also do this for you.


I'd add Prolog to that list; but Fortran and Perl aren't all that different from other procedural languages.

Very well done! wow! Thanks for this. Going through this now.

One comment about the syntax highlighting: the dark blue for keywords against a black background is very difficult to read. And if you opt for the white background, the text becomes off-white/grey, which again is very difficult to read.


>Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.

Zig is just C with a marketing push. Most developers already know C.


That tagline unfortunately turned me off the book, without even starting to read.

I really don't need this kind of self-enlightenment rubbish.

What if I read the whole book and felt no change?

I think I understand SoA just fine.


It is also just such a supremely unziglike thing to state.

Early talks by Andrew explicitly leaned into the notion that "software can be perfect", which is a deviation from how most programmers view software development.

Zig also encourages you to "think like a computer" (also an explicit goal stated by Andrew) even more than C does on modern machines, given things like real vectors instead of relying on auto vectorization, the lack of a standard global allocator, and the lack of implicit buffering on standard io functions.

I would definitely put Zig on the list of languages that made me think about programming differently.


I'm not sure how what you stated is different from writing highly performant C.

I think it mostly comes down to the standard library guiding you down this path explicitly. The C stdlib is quite outdated and is full of bad design that affects both performance and ergonomics. It certainly doesn't guide you down the path of smart design.

Zig _the language_ barely does any of the heavy lifting on this front. The allocator and io stories are both just stdlib interfaces. Really the language just exists to facilitate the great toolchain and stdlib. From my experience the stdlib seems to make all the right choices, and the only time it doesn't is when the API was quickly created to get things working, but hasn't been revisited since.

A great case study of the stdlib being almost perfect is SinglyLinkedList [1]. Many other languages implement it as a container, but Zig has opted to implement it as an intrusively embedded element. This might confuse a beginner who would expect SinglyLinkedList(T), but it has implications for allocation, and it turns out that embedding it gives you a more powerful API. And of course all operations are defined with performance in mind: prepend is given to you since it's cheap, but if you want an append you have to implement it yourself (it's a one-liner, but clearly more expensive to the reader).

Little decisions add up to make the language feel great to use and genuinely impressive for learning new things.

[1] https://ziglang.org/documentation/master/std/#std.SinglyLink...


I suspect most developers do not know C.

C is fine; C++ is where they jumped the shark.

C++ is far better than C in very many ways. It's also far worse than C in very many other ways. Given a choice between the two, I'd still choose C++ every day just for RAII. There's only so much that we can blame programmers for memory leaks, use-after-free, buffer overflows, and other things that are still common in new C code. At some point, it is the language itself that is unsuitable and insufficient.

C++ explored a lot of ideas that some modern languages borrowed. C++ just had to haul along all the cruft it inherited and built up.

No, C is not fine. It is a really bad language that I unfortunately have to code professionally.

It looks cool! No experience with Zig, so I can't comment on the accuracy, but I will take a look at it this week. It's also a bit annoying that there is no PDF version I could download, as the website is pretty slow. After taking a look at the repository (https://github.com/zigbook/zigbook/tree/main), each page seems to be written in AsciiDoc, so I'll look into compiling a PDF version later today.

If there is a PDF version, please remember to give me one. Thank you in advance.

I'd suggest downloading the AsciiDoc files from the repository and converting them to PDF with Pandoc (note that Pandoc doesn't read AsciiDoc directly, so you'd go through asciidoctor's DocBook output first). You may also need pdftk if you convert one file at a time and need to assemble those files into one PDF.

https://pandoc.org/

https://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/
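If you want to try that, a sketch of the pipeline might look like the following. The chapter glob and directory layout are guesses about the repo's tree (adjust as needed), and asciidoctor-pdf would be a simpler one-step alternative; this version sticks to the tools mentioned above.

```shell
# Fetch the book's AsciiDoc sources (repo from the thread).
git clone https://github.com/zigbook/zigbook.git
cd zigbook

# Pandoc cannot read AsciiDoc directly, so convert each chapter to
# DocBook with asciidoctor first, then render that to PDF with Pandoc.
# (The *.adoc glob assumes chapters sit at the repo root; adjust as needed.)
for f in *.adoc; do
    asciidoctor -b docbook5 -o "${f%.adoc}.xml" "$f"
    pandoc -f docbook "${f%.adoc}.xml" -o "${f%.adoc}.pdf"
done

# Stitch the per-chapter PDFs into a single file with pdftk.
pdftk ./*.pdf cat output zigbook.pdf
```

Pandoc's PDF output needs a LaTeX engine installed; if that's a hassle, `asciidoctor-pdf *.adoc` skips Pandoc entirely.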


It's pretty incredible how much ground this covers! However, the ordering feels a little confusing to me.

One example is in chapter 1. It talks about symbol exporting based on platform type, without explaining ELF. This is before talking about while loops.

It's had some interesting nuggets so far, and I've followed along since I'm familiar with some of the broad strokes, but I can see it being confusing to someone new to systems programming.


A nitpick about the website: the top progress bar is kind of distracting (high-contrast color with animation). It's also unnecessary because there is already a scrollbar on the right side.

Hmm, the explanation of allocators is much more detailed in the book, but the version in the language reference, although more compact, seems much more reasonable to me. [0]

I'll keep exploring this book though, it does look very impressive.

0 - https://ziglang.org/documentation/master/#Memory


It's really hard to believe this isn't AI generated, but today I was trying to use the HTTP server from std after the 0.15 changes and couldn't figure out how it's supposed to work until I searched repos on GitHub. LLMs couldn't figure it out either; they were stuck in a loop of changing/breaking things even further until they arrived at the solution of using the deprecated way. So I guess this actually is handwritten, which is amazing, because it looks like the best resource for Zig I've seen up until now.

> It's really hard to believe this isn't AI generated

A case of a person relying on LLMs so much that they cannot imagine doing something big themselves.


It's not only the size - it was pushed all at once, anonymously, using text that highly resembles that of an AI. I still think that some of the text is AI generated. Perhaps not the code, but the wording of the text just reeks of AI.

> it was pushed all at once

For some of my projects I develop against my own private git server, then when I'm ready to go public, create a new git repo with a fully squashed history. My early commits are basically all `git commit -m "added stuff"`
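That squash-on-publish step can be done with an orphan branch, roughly like this (the remote name and URL are placeholders):

```shell
# Inside the private working copy: start a branch with no ancestry,
# commit the current tree as one commit, and push only that.
git checkout --orphan public
git add -A
git commit -m "Initial public release"

# Point at the fresh public repository and push the lone commit.
git remote add public-origin git@github.com:example/project.git
git push public-origin public:main
```

The orphan branch shares no history with the private one, so nothing about the messy early commits ever leaves the private server.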


Can you provide some examples where the text reeks of AI?

Literally the heading as soon as you click the submitted link

> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.

The "it's not X, it's Y" phrasing screams LLM these days


It's almost as though the LLMs were trained on all the writing conventions which are used by humans and are parroting those, instead of generating novel outputs themselves.

They haven’t picked up any one human writing style; they’ve converged on a weird amalgamation of expressions and styles that taken together don’t resemble any real human’s writing and begin to feel quite unnatural.

The Uncanny Valley of prose.

Plenty of people use “it’s not X, it’s Y”

As someone who uses em-dashes a lot, I’m getting pretty tired of hearing something “screams AI” about extremely simple (and common) human constructs. Yeah, the author does use that convention a number of times. But that makes sense, if that’s a tool in your writing toolbox, you’ll pull it out pretty frequently. It’s not signal by itself, it’s noise. (does that make me an AI!?) We really need to be considering a lot more than that.

Reading through the first article, it appears to be compelling writing and a pretty high quality presentation. That’s all that matters, tbh. People get upset about AI slop because it’s utterly worthless and exceptionally low quality.


https://www.zigbook.net/chapters/45__text-formatting-and-uni...

The repetitiveness of the shell commands (and using zig build-exe instead of zig run when the samples consist of short snippets), the filler bullet points and section organization that fail to convey any actual conceptual structure. And ultimately throughout the book the general style of thought processes lacks any of the zig community’s cultural anachronisms.

If you take a look at the repository you’ll also notice baffling tech choices, not justified by the author, that run counter to the Zig ethos.

(Edit: the build system chapter is an even worse offender in meaningless cognitively-cluttering headings and flowcharts, it’s almost certainly entirely hallucinated, there is just an absurd degree of unziglikeness everywhere: https://www.zigbook.net/chapters/26__build-system-advanced-t... -- What’s with the completely irrelevant flowchart of building the zig compiler? What even is the point of module-graph.txt? And icing on the cake in the “Vendoring vs Registry Dependencies” section.)


I read the first few paragraphs. Very much reads like LLM slop to me...

E.g., "Zig takes a different path. It reveals complexity—and then gives you the tools to master it."

If we had a reliable oracle, I would happily bet a $K on significant LLM authorship.


I've had the same experience as you with Zig. I quite love the idea of Zig, but the undocumented churn is a bit much. I wish they had auto-generated docs that reflect the current state of the stdlib, at least. Even if it just listed the signatures with no commentary.

I was trying to solve a simple problem but Google, the official docs, and LLMs were all out of date. I eventually found what I needed in Zig's commit history, where they casually renamed something without updating the docs. It's been renamed once more apparently, still not reflected in the docs :shrugs:.


But you can tell your LLM to just go look at the source code (after checking it out so it doesn’t try 20s github requests). Always works like a charm for me.

Wait, doesn't `zig std` launch the autogenerated docs?

It’s currently broken, or was recently on the 0.16 dev branch (master)

   The book content itself is deliberately free of AI-generated prose. Drafts may start anywhere, but final text should be reviewed, edited, and owned by a human contributor.
There is more specificity around AI use in the project README. There may have been LLMs used during drafting, which has led to the "hallmarks" sticking around that some commenters are pointing out.

That statement is honestly self-contradictory. If a draft was AI-generated and then reviewed, edited, and owned by a human contributor, then the parts which survived reviewing and editing verbatim were still AI-generated...

Why do you care? If a human reviewed and edited it, someone filtered it to make sure it’s correct. That it’s validated to be correct is the main point.

This source is really hard to trust. AI or not, the author has done no work to really establish epistemological reliability and transparency. The entire book was published at once with no history, no evidence of the improvement and iteration it takes to create quality work, and no reference as to the creative process or collaborators or anything. And on top of that, the author does not seem to really have any other presence or history in the community. I love Zig, and have wanted more quality learning materials to exist. This, unfortunately, does not seem to be it.

How do you feel about regular books, whose iterations and edits you don't see?

For books that are published in more traditional manners, digital or paper, there is normally a credible publisher, editors, sometimes a foreword from a known figure, reviews from critics or experts in the field, and often a bio about the author explaining who they are and why they wrote the book etc. These different elements are all signals of reliability, they help to convey that the content is more than just fluff around an attention-grabbing title, that it has depth and quality and holds up. The whole publishing business has put massive effort into establishing and building these markers of trust.

Do you have any criticism of the content, or just "I don't know the author"?

They didn't say "this is in error", so they don't need any such example errors. They also didn't say just "I don't know the author".

So despite this...

> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.

I just don't buy it. I'm 99% sure this is written by an LLM.

Can the author... Convince me otherwise?

> This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side.

> Welcome to the Zigbook. Your transformation starts now.

...

> You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises.

...

> This is not about memorizing syntax. This is about earning mastery.


Pretty clear it's all AI. The @zigbook account only has 1 activity prior to publishing this repo, and that's an issue where they mention "ai has made me too lazy": https://github.com/microsoft/vscode/issues/272725

After reading the first five chapters, I'm leaning this way. Not because of a specific phrase, but because the pacing is way off. It's really strange to start with symbol exporting, then moving to while loops, then moving to slices. It just feels like a strange order. The "how it works" and "key insights" also feel like a GPT summarization. Maybe that's just a writing tic, but the combination of correct grammar with bad pacing isn't something I feel like a human writer has. Either you have neither (due to lack of practice), or both (because when you do a lot of writing you also pick up at least some ability to pace). Could be wrong though.

It's just an odd claim to make when it feels very much like AI generated content + publish the text anonymously. It's obviously possible to write like this without AI, but I can't remember reading something like this that wasn't written by AI.

It doesn't take away from the fact that someone used a bunch of time and effort on this project.


To be clear, I did not dismiss the project or question its value - simply questioned this claim as my experience tells me otherwise and they make a big deal out of it being human written and "No AI" in multiple places.

I agree with you. After reading a couple of the chapters I'd be surprised if this wasn't written by an LLM.

Did they actually spend a bunch of time and effort though? I think you could get an llm to generate the entire thing, website and all.

Check out the sleek-looking terminal--there's no ls or cd, it's just an AI hallucination.


I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.

Pangram[1] flags the introduction as totally AI-written, which I also suspected for the same reasons you did

[1] one of the only AI detectors that actually works, 99.9% accuracy, 0.1% false positive


Keep in mind that pangram flags many hand-written things as AI.

> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.

> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.

> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.

> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.

I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running pangram on some of their polished hand written stuff.

https://www.reddit.com/r/teachingresources/comments/1icnren/...


How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.

Weird to me that nobody ever posts the actual alleged false positive text in these criticisms

I've yet to see a single real Pangram false positive that was provably published when it says it was, yet plenty such comments claiming they exist


Doesn't mean that the author might not use AI to optimise legibility. You can write stuff yourself and use an LLM to enhance the reading flow. Especially for non-native speakers it is immensely helpful to do so. Doesn't mean that the content is "AI-generated". The essence is still written by a human.

> Doesn't mean that the author might not use AI to optimise legibility.

I agree that there is a difference between entirely LLM-generated, and LLM-reworded. But the statement is unequivocal to me:

> The Zigbook intentionally contains no AI-generated content—it is hand-written

If an LLM was used in any fashion, then this statement is simply a lie.


But then you cannot write that

"The Zigbook intentionally contains no AI-generated content—it is hand-written"


> Can the author... Convince me otherwise?

Not disagreeing with you, but out of interest, how could you be convinced otherwise?


To me it's another specimen in the "demonstrating personhood" problem that predates LLMs. e.g. Someone replies to you on HN or twitter or wherever, are they a real person worth engaging with? Sometimes it'll literally be a person but their behavior is indistinguishable from a bot, that's their problem. Convincing signs of life include account age, past writing samples, and topic diversity.

I'm not sure, but I try my best to assume good faith / be optimistic.

This one hit a sore spot b/c many people are putting time and effort into writing things themselves and to claim "no ai use" if it is untrue is not fair.

If the author had a good explanation... Idk not a native English writer and used an LLM to translate and that included the "no LLMs used" call-out and that was translated improperly etc


note that the front page also says: "61 chapters • Project-based • Zero AI"

Git log / draft history

I wish AI had the self-aware irony of adding vomit emojis to its sycophantic sentences.

I don't think so, I think it's just a pompous style of writing.

You can't just say that a linguistic style "proves" or even "suggests" AI. Remember, AI is just spitting out things it's seen before elsewhere. There are plenty of other texts I've seen with this sort of writing style, written long before AI was around.

Can I also ask: so what if it is or it isn't?

While AI slop is infuriating and the bubble hype is maddening, calling out content as "must be AI" every time somebody doesn't like its style, and then debating whether it is or isn't, is at least as maddening. It feels like all content published now gets debated like this, and I'm definitely not enjoying it.


You can be skeptical of anything but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest that it's generated text.

As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota?


> but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest ai generated text

Why? Didn't people use such constructions frequently before AI? Some authors probably overused them at the same frequency AI does.


I don't think there was very much abuse of "not just A, but B" before ChatGPT. I think that's more of a product of RLHF than the initial training. Very few people wrote with the incredibly overwrought and flowery style of AI, and the English speaking Internet where most of the (English language) training data was sourced from is largely casual, everyday language. I imagine other language communities on the Internet are similar but I wouldn't know.

Don't we all remember 5 years ago? Did you regularly encounter people who write like every followup question was absolutely brilliant and every document was life changing?

I think about why's (poignant) Guide to Ruby [1], a book explicitly about how learning to program is a beautiful experience. And the language is still pedestrian compared to the language in this book. Because most people find writing like that saccharine, and so don't write that way. Even when they're writing poetically.

Regardless, some people born in England can speak French with a French accent. If someone speaks French to you with a French accent, where are you going to guess they were born?

[1] https://poignant.guide/book/chapter-1.html


It's been alleged that a major source of training data for many LLMs was libgen and SciHub - hardly casual.

Even if that were comparable in size to the conversational Internet, how many novels and academic papers have you read that used multiple "not just A, but B" constructions in a single chapter/paper (that were not written by/about AI)?

IMO HN should add a guideline about not insinuating things were written by AI. It degrades the quality of the site similarly to many of the existing rules.

Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out.


What degrades conversation is to lie about something being not AI when it actually is. People pointing out the fraud are right to do so.

One thing I've learned is that comment sections are a vital defense on AI content spreading, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments the consensus that it is AI. So now it's my standard practice to check comments to see what others are saying.

If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being perpetrated, that just makes this place a target for AI scams.


It's 2025, people are going to use technology and its use will spread.

There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use.

It seems consistent to me with the rules against low effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression.


I'm 100% okay with AI spreading. I use it every day. This isn't a matter of an ideological battle against AI, it's a matter of fraudulent misrepresentation. This wouldn't be a discussion if the author themselves hadn't claimed what they had, so I don't see why the community should be barred from calling that out. Why bother having curious discussions about this book when they are blatantly lying about what is presented here? Here's some curiosity: what else are they lying about, and why are they lying about this?

To clarify, there is no evidence of any lying or fraud. So far all we have evidence of is HN commenters assuming bad faith and engaging in linguistic phrenology.

There is evidence, it's circumstantial, but there's never going to be 100% proof. And that's the point, that's why community detection is the best weapon we have against such efforts.

(Nitpick: it's actually direct evidence, not circumstantial evidence. I think you mean it isn't conclusive evidence. Circumstantial evidence is evidence that requires an additional inference, like the accused being placed at the scene of the crime implying they may have been the perpetrator. But stylometry doesn't require any additional inference, it's just not foolproof.)

Who cares?

Still better than just nagging.


Using AI to write is one thing, claiming you didn't when you did should be objectionable to everyone.

This.

I wouldn't mind a technical person transparently using AI for the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated writing is thoroughly vetted to make sure there's no hallucinated misunderstanding in the final text. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skill and deep technical expertise is much narrower than just the latter.

But claiming you didn't use AI when you did breaks all trust between you and your readership and makes the end result pretty much worthless, because why read a book if you don't trust the author not to waste your time?


Who wants to be so petty.

I'm sure there are more interesting things to say about this book.


So petty as to lie about using AI or so petty as to call it out? Calling it out doesn't seem petty to me.

I intend to learn Zig when it reaches 1.0 so I was interested in this book. Now that I see it was probably generated by someone who claimed otherwise, I suspect this book would have as much of a chance of hurting my understanding as helping it. So I'll skip it. Does that really sound petty?


[flagged]


I understand being okay with a book being generated (some of the text I published in this manual [1] is generated), I can imagine not caring that the author lied about their use of AI, but I really don't understand the suggestion I write a book about a subject I just told you I'm clueless about. I feel like there's some kind of epistemic nihilism here that I can't fathom. Or maybe you meant it as a barb and it's not that deep? You tell me I guess.

[1] https://maxbondabe.github.io/attempt/intro.html


I would rather care whether there is a book at all and whether it is useful.

> I write a book about a subject I just told you I'm clueless about

Use AI. Even if you use AI, it's still a lot of work. Or write a book about why people shouldn't let AI write their books.


I'm also concerned whether it is useful! That's why I'm not gonna read it after receiving a strong contrary indicator (which was less the use of AI than the dishonesty around it). That's also why I try to avoid sounding off on topics I'm not educated in (which is to say, why I'm not writing a book about Zig).

Remember - I am using AI and publishing the results. I just linked you to them!


> I'm also concerned whether it is useful!

So you could do everyone a favour by giving a sufficiently detailed review, possibly with recommendations to the author on how to improve the book. Definitely more useful than speculating about the author's integrity.


I'm satisfied with what's been presented here already, and as someone who doesn't know Zig it would take me several weeks (since I would have to learn it first), so that seems like an unreasonable imposition on my time. But feel free to provide one yourself.

Well, there must have been a good reason why you don't like the book. I didn't see good reasons in this whole discussion so far, just a lot of pedantry. No commenter points to technical errors, inaccuracies, poor code examples, or pedagogical problems. The entire objection rests on subjective style preferences and aesthetic nitpicking rather than legitimate quality concerns.

I don't see what else I can say to help you understand. I think we just have very different values and world views and find one another's perspective baffling. Perhaps your preferred AI assistant, if directed to this conversation, could put it in clearer terms than I am able to.

My statement refers to this claim: "I'm 99% sure this is written by an LLM."

The hypocrisy and entitlement mentality that prevails in this discussion is disgusting. My recommendation to the fellow below that he should write a book himself (instead of complaining) was even flagged, demonstrating once again the abuse of this feature to suppress other, completely legitimate opinions.


I'm guessing it was flagged because it came off as snark. I've gone ahead and vouched it but of course I can't guarantee it won't get flagged again. To be frank this comment is probably also going to get flagged for the strong language you're using. I don't think either are abusive uses of flagging.

Additionally, please note that I neither complained nor expressed an entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that really serves other than to turn up the rhetorical temperature.


The book claims it’s not written with the help of AI, but the content seems so blatantly AI-generated that I’m not sure what to conclude, unless the author is the guy OpenAI trained GPT-5 on:

> Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software.

“Not just X - Y” constructions.

> By Chapter 61, you will not just know Zig; you will understand it deeply enough to teach others, contribute to the ecosystem, and build systems that reflect your complete mastery.

More not just X - Y constructions with parallelism.

Even the “not made with AI” banner seems AI generated! Note the 3 item parallelism.

> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.

I don’t have anything against AI generated content. I’m just confused what’s going on here!

EDIT: after scanning the contents of the book itself I don’t believe it’s AI generated - perhaps it’s just the intro?

EDIT again: no, I’ve swung back to the camp of mostly AI generated. I would believe it if you told me the author wrote it by hand and then used AI to trim the style, but “no AI” seems hard to believe. The flow charts in particular stand out like a sore thumb - they just don’t have the kind of content a human would put in flow charts.


Every time I read things like this, it makes me think that AI was trained off of me. Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are all examples of how they teach to write essays in high school and college. They're also all examples of how I think and have learned to communicate.

I'm not sure what to make of that either.


To be explicit, it’s not general hallmarks of good writing. It’s exactly two common constructions: not X but Y, and 3 items in parallel. These two pop up in extreme disproportion to normal “good writing”. Good writers know to save these tricks for when they really want to make a point.

Most people aren’t great writers, though (including myself). I’d guess that if people find the “not X but Y” compelling, they’ll overuse it. Overusing some stylistic element is such a normal writing “mistake”. Unless they’re an extremely good writer with lots of tools in their toolbox. But that’s not most people.

I find the probability that a particular writer latches onto the exact same patterns that AI latches onto, and does not latch onto any of the patterns AI does not latch onto, to be quite low. Is it a 100% smoking gun? No. But it’s suspicious.

Interesting, I'll have to look for those.

But you didn't write that "Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are not just examples of how they teach to write essays in high school and college; they're also all examples of how I think and have learned to communicate."

Clearly your perception of what is AI generated is wrong. You can't tell something is AI generated only because it uses "not just X - Y" constructions. I mean, the reason AI text often uses it is because it's common in the training material. So of course you're going to see it everywhere.

I sent the text through an AI detector with 0.1% false positive rate and it was highly confident the Zig book introduction was fully AI-written

Find me some text from pre-AI that uses so many of these constructions in such close proximity if it’s really so easy - I don’t think you’ll have much luck. Good authors have many tactics in their rhetorical bag of tricks. They don’t just keep using the same one over and over.

The style of marketing material was becoming SO heavily cargo-culted with telltale signs exactly like these in the leadup to LLMs.

Humans were learning the same patterns off each other. Such style advice has been floating around on e.g. LinkedIn for a while now. Just a couple years later, humans are (predictably) still doing it, even if the LLMs are now too.

We should be giving each other a bit of break. I'd personally be offended if someone thought I was a clanker.


You’re completely right, but blogs on the internet are almost entirely not written by great authors. So that’s of no use when checking if something is AI generated.

As someone who is diving deep into Zig, I'm actually going to evaluate all this (and compare it to Ziglings or the Zig track on Exercism).

A lot of love went into this. It's evident throughout. Great job!

Nah, just a lot of prompting.

For me, personally, any new language needs to have a "why." If a new language can't convince me in 1-2 sentences why I need to learn it and how it's going to improve software development, as a whole, it's 99% bs and not worth my time.

DHH does a great job of clarifying this during his podcast with Lex Fridman. The "why" is immediately clear and one can decide for themselves if it's what they're looking for. I have not yet seen a "why" for Zig.


Hmmm what about this: https://ziglang.org/learn/why_zig_rust_d_cpp/

Convincing enough?


But can we train AI on this beautifully hand-crafted material, and ask it later to rewrite Rust with Zig? :]

It was very hard to find a link to the table of contents… then I tried opening it and the link didn’t work. I’m on iOS. I’d have loved to take a look quickly what’s in the book…


inb4 people start putting a standardized “not AI generated” symbol in website headers

Some text is unreadable because it is so small.

Why do we need another language?

> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.

I think it's time to have a badge for non LLM content, and avoid the rest.


There is also Brainmade: https://brainmade.org/

What's stopping AI made content from including this as well?

I imagine it's kind of like "What's stopping someone from forging your signature on almost any document?" The point is less that it's hard to fake, and more that it's a line you're crossing where everyone agrees you can't say "oops I didn't know I wasn't supposed to do that."

The name seems odd to me, because I think it's fine to describe things as a digital brain, especially when the word brain doesn't only apply to humans but to organisms as simple as a 959 cell roundworm with 302 neurons.

Also the logo seems to imply a plant has taken over this person and the content was made by some sort of body-snatched pod person.

If this gets any traction, AI bros on Twitter will put it on their generated images just out of spite.


Even for content that isn’t directly composed by llm, I bet there’d be value in an alerting system that could ingest your docs and code+commits and flag places where behaviour referenced by docs has changed and may need to be updated.

This kind of “workflow” llm use has the potential to deliver a lot of value even to a scenario where the final product is human-composed.
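The workflow described above could be sketched as a small script, shown here as a minimal illustration only (the backtick-quoting convention and all names are hypothetical): extract identifiers that the docs reference and flag any whose name no longer appears anywhere in the code, as a crude signal that the docs may have drifted.

```python
import re

def doc_identifiers(doc_text):
    """Collect `backtick-quoted` identifiers referenced in documentation."""
    return set(re.findall(r"`([A-Za-z_][A-Za-z0-9_.]*)`", doc_text))

def stale_references(doc_text, code_text):
    """Return doc-referenced identifiers whose final name component
    no longer appears anywhere in the code (a crude drift signal)."""
    return sorted(name for name in doc_identifiers(doc_text)
                  if name.split(".")[-1] not in code_text)

docs = "Call `load_config` before `run_server`; tune `Settings.timeout` as needed."
code = "def load_config(): ...\nclass Settings:\n    retries = 3\n"
print(stale_references(docs, code))  # flags run_server and Settings.timeout
```

A real tool would parse the code with an AST and diff against commits rather than substring-match, but even this naive version shows how the check stays mechanical while the prose stays human-written.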



> Most programming languages hide complexity from you—they abstract away memory management, mask control flow with implicit operations, and shield you from the machine beneath. This feels simple at first, but eventually you hit a wall. You need to understand why something is slow, where a crash happened, or how to squeeze every ounce of performance from your hardware. Suddenly, the abstractions that helped you get started are now in your way.

> Zig takes a different path. It reveals complexity—and then gives you the tools to master it.

> This book will take you from Hello, world! to building systems that cross-compile to any platform, manage memory with surgical precision, and generate code at compile time. You will learn not just how Zig works, but why it works the way it does. Every allocation will be explicit. Every control path will be visible. Every abstraction will be precise, not vague.

But sadly people like the prompter of this book will lie and pretend to have written things themselves that they did not. First three paragraphs by the way, and a bingo for every sign of AI.


Right in those same first few paragraphs... "...hiding something from you. Because they are."

Would most LLMs have written that invalid fragment sentence "Because they are." ?

I don't think you have enough to go on to make this accusation.


Yes, that fragment in particular screams LLM to me. It's the exact kind of meaningless yet overly dramatic slop that LLMs love

These posts are getting old.

I had a discussion on some other submission a couple of weeks back, where several people were arguing "it's obviously AI generated" (the style, btw, was completely different to this, with quite a few expletives...). When I put the text into 5 random AI detectors, all except one (which said mixed, 10% AI or so) said 100% human. I was being downvoted, and the argument became "AI detection tools can't detect AI", yet somehow the same people claim there are 100% clear telltale signs that say it's AI (why those detection tools can't detect them is baffling to me).

I have the feeling that the whole "it's AI" shtick has become a synonym for "I don't like this writing style".

It really does not add to the discussion. If people would post immediately "there's spelling mistakes this is rubbish", they would rightfully get down voted, but somehow saying "it's AI" is acceptable. Would the book be any more or less useful if somebody used AI for writing it? So what is your point?


Check out the other examples presented in this thread or read some of the chapters. I'm pretty sure the author used LLMs to generate at least parts of this text. In this case this would be particularly outrageous, since the author explicitly advertises the content as 100% handwritten.

> Would the book be any more or less useful if somebody used AI for writing it?

Personally, I don't want to read AI generated texts. I would appreciate if people were upfront about their LLM usage. At the very least they shouldn't lie about it.


I ran the introduction chapter through Pangram [1], which is one of the most reliable AI-generated text classifiers out there [2] (with a benchmarked accuracy of 99.85% over long-form text), and it gives high confidence for it having been AI-generated. It's also very intuitively obvious if you play a lot with LLMs.

I have no problem at all reading AI-generated content if it's good, but I don't appreciate dishonesty.

[1]: https://www.pangram.com/ [2]: https://arxiv.org/pdf/2402.14873


The em dashes?

There's also the classic “it's not just X, it's Y”, adjective overuse, rule of 3, total nonsense (manage memory with surgical precision? what does that mean?), etc. One of these is excusable, but text entirely composed of AI indicators is either deliberately written to mimic AI style, or the product of AI.

"not just x but y" is definitely a telltale AI marker. But people can write that as well. Also, our writing styles can be influenced as we've seen so much AI content.

Anyway, if someone says they didn't use AI, I would personally give them the benefit of the doubt for a while at least.


this construction is familiar to anyone who has taken a course on writing post middle or high school.

The formal version is "not only... but also" https://dictionary.cambridge.org/us/grammar/british-grammar/..., which I personally use regularly but I often write formally even in informal settings.

"not just... but" is just the less formal version.

Google ngrams shows the "not just ... but" construction has a sharp increase starting in 2000. https://books.google.com/ngrams/graph?content=not+just+*+but...

Same with "not only ... but also" https://books.google.com/ngrams/graph?content=not+only+*+but...

Like many scholarly linguistic constructions, this is one many of us saw in Latin class with non solum ... sed etiam or non modo ... sed etiam: https://issuu.com/uteplib/docs/latin_grammar/234. I didn't take ancient Greek, but I wouldn't be surprised if there's also a version there.

More info

- https://www.phrasemix.com/phrases/not-just-something-but-som...

- https://www.merriam-webster.com/dictionary/not%20just

- https://www.grammarly.com/blog/writing-techniques/parallelis...

- https://www.crockford.com/style.html

- https://englishan.com/correlative-conjunctions-definition-ru...


Meh. I mean, who's it for? People should be adopting the stance that everything is AI on the internet and make decisions from there. If you start trusting people telling you that they're not using AI, you're setting yourself up to be conned.

Edit: So I wrote this before I read the rest of the thread, where everyone is pointing out this is indeed probably AI, so right off the bat the "AI-free" label is conning people.


I guess now the trend is Zig. The era of JavaScript frameworks has come to an end. After that was the AI trend. And now we have Zig and its allocators, especially the arena allocator.

/S


[flagged]


Username definitely doesn't check out on this comment. Please try again.

There's an actual production grade database written in zig: https://tigerbeetle.com/

ghostty and bun aren't real world enough for you?

They're not. It's real world when there's a market for paying Zig jobs, not when you can list a few github repos that use it.

Simple: your priors are wrong. People use Zig.

Even if what you say is true, people make bets on new tech all the time. You show up early so you can capture mindshare. If Zig becomes mainstream then this could be the standard book that everyone recommends. Not just that, it’s more likely the language succeeds if it has good learning materials - that’s an outcome the author would love.

> people make bets on new tech all the time. You show up early so you can capture mindshare.

I got in on the ground floor with Elixir, got my startup built on it. Now we have 3 full-time engineers working in Elixir. None of that would have happened if I had looked at a young language and said "it's not used in the real world".


"nobody uses in the real world yet" is uncharitable, as Zig is used in many real-world projects (Bun and Tigerbeetle are written in Zig, for example). But there's value being at the forefront of technologies that you think are going to explode soon, so that's how people find time and energy, I guess.

there's no way someone made this for free, where do I donate? I'm gonna get so much value from this, this feels like stealing

It's AI-written FWIW

though maybe AI is getting to the point it can do stuff like this somewhat decently


Dang duped again

The first page says none of the book was written by AI

Yes, it's a false claim

how do you know this? Let us know please, thanks. Edit: I see you used this to check: https://news.ycombinator.com/item?id=45948220

pangram.com, the most accurate and lowest false positive AI detector

https://www.pangram.com/blog/third-party-pangram-evals


Why does this feel like an ad? I've seen pangram mentioned a few times now, always with that tagline. It feels like a marketing department skulking around comments.

The other pangram mention elsewhere in this comment section is also me -- I'm totally unaffiliated with them, just a fan of their tool

I specify the accuracy and false positive rate because otherwise skeptics in comment sections might think it's one of the plethora of other AI detection tools that don't really work


FWIW I work on AI and I also trust Pangram quite a lot (though exclusively on long-form text spanning at least 4 or more paragraphs). I'm pretty sure the book is heavily AI written.

SAME. I was looking for a donation button myself! I've paid for worse quality instructional material. this is just the sort of thing I'm happy to support

Need this but to learn AI

They named a programming language after a wireless protocol?

What is it with HN and the "oh, I thought {NAME} is the totally different tool {NAME}" comments? Is it some inside joke?

Or just incredulity that people naming a technology are ignorant of the fact that another well-known technology is already using it.

¯\_(ツ)_/¯


One is Zig, the other is Zigbee; I don't understand your comment...


