Hacker News | f311a's comments

PHP becomes a more complex language with each update. For what reason? Its application is still mostly limited to the web.

A lot of C# and Java code is oriented towards web backends, too. Which are quite big and complex. So it seems natural that languages in the same design space (trad OO) converge on similar features. I think the only exception these days is Go.

I think these days you could change "You can write Fortran in any language" to "You can structure your code like Spring in any language"…


The web is not getting any simpler.

Exactly, I'd even argue that web development (back & front-end) is by far the largest job / industry in software development.

I don't see how making a language more complex can help with that. Complex languages make sense for systems programming, where you want to squeeze out performance.

Making a language more complex often leads to simpler code. Keeping a language too simple often forces overly complex code.

Obviously you haven't heard of NativePhp :)

Still, even if it was "just" the web, why wouldn't it evolve? It's a great language, with a big user base, and there's always room for improvements and to increase the developer experience


I downvoted you before reading the fine article. I'm back to correct that.

The new array_first() and array_last() functions are nice; everything else is either a reimplementation of existing features or "features" which will make maintainability more difficult. The pipe operator is one such example. I don't need it - these nested function calls are not really an issue in any codebase I've seen. The new syntax only works for unary functions, so higher-arity functions must be wrapped in an arrow function. It's a mess and more bug-prone than just nesting the functions.


> The pipe operator [...] syntax only works for unary functions, so higher arity functions must be wrapped in an arrow function.

It's coming - but to get PHP RFCs to pass they have to be salami-sliced, otherwise they're voted down.

https://wiki.php.net/rfc/partial_function_application_v2


The pipe operator makes it much easier to create home-grown cryptographic hash functions, as everybody used to do in the early 2000s:

    md5($password)
    |> sha1(...)
    |> sha1(...)
    |> md5(...)
    |> rot13(...)
    |> crc32(...)

Oh, now I'm convinced!

Many of the newer features have this problem. Like the match keyword, enums, closures etc. They are half-baked versions of what could be powerful and expressive features.

Meanwhile it has seemingly abandoned features and unique selling points: the built-in templating, associative arrays with value semantics, the fact that it integrates well with C, and the simple fact that it can be used to write web server scripts very easily. To me, many of these cool features have been largely ignored or even moved away from.


>Like the match keyword, enums, closures etc. They are half-baked versions of what could be powerful and expressive features.

The problem is that the PHP project is maintained by (mostly) unsponsored contributors. There's no giant corporation behind it. Each of these new features is designed by a couple of people (per RFC) and then discussed and voted on by other contributors. The match keyword, for example, is considered future scope of this RFC, which is still being worked on: https://wiki.php.net/rfc/pattern-matching

Also, a lot of these half-baked features are designed to be implemented in steps, because of what I said in my other paragraph and to increase the odds of being accepted (it's well known that it's hard to get an RFC accepted, and a lot of good ones haven't made it past the voting phase).

When you consider this, it’s amazing that we get so much from so little.


I don't understand how this is more bug-prone. The nesting is a HUGE problem because it's extremely unreadable. The pipe is much more readable, and array functions are very simple too.

How many times does that appear in any codebase you've worked on? Those types of constructions are typically hidden away in "helper functions" that are straightforward to understand and debug. They are certainly not typical in business logic.

Now we have two syntaxes to be familiar with and maintain. And you just know the clowns are going to come along and convert one syntax to the other because they can, which completely ruins the git history, and you can't tell what functionality actually changed at a glance. There will be bickering and bikeshedding about "modernizing" codebases.


I don't know, I think it's really fine. It's very typical to compose a bunch of array functions together and, honestly, I think that's good form most of the time. This makes that a little bit easier.

I'm sort of on the opposite side. I'm unconvinced by extra functions for arrays in the global namespace and I'm happier about syntactic improvements and debugging improvements.

I'm glad to hear a dissenting opinion. Interesting that we experience PHP differently.

What is your background? I've been working mostly with PHP and Python for about 25 years. Do you come from e.g. Java?


A lot of people behind APTs are low-skilled and make silly mistakes. I worked for a company that investigates traces of APTs, they make very silly mistakes all the time. For example, oftentimes (there are tens of cases) they want to download stuff from their servers, and they do it by setting up an HTTP server that serves the root folder of a user without any password protection. Their files end up indexed by crawlers since they run such servers on default ports. That includes logs such as bash history, tool logs, private keys, and so on.

They win because of quantity, not quality.

But still, I don't trust Anthropic's report.


The security world overemphasizes (fetishizes, even) the "advanced" part, because zero days and security tools to compensate against zero days are cool and fun, and underemphasizes the "persistent" part, because that's boring, hard work, and no fun.

And, unless you are Rob Joyce, talking about the persistent part doesn't get you on the main stage at a security conference (e.g., https://m.youtube.com/watch?v=bDJb8WOJYdA)


uv and ruff use that approach. They are Python-related tools, but written 100% in Rust.


Not sure about Claude, but OpenAI models are pretty bad at Go for some reason. For example, they always want to replace the "new" style for-loop that uses range with the old syntax. This drives me nuts.


Installing Rust projects is usually as simple as calling `cargo install`. No need to wait 5-30 minutes for an LLM to figure this out.

People are delegating way too much to LLMs. In turn, this makes your own research or problem-solving skills less sharp.


Quoting my article:

> This is a useful starting point for a project like this—in figuring out the compile step the coding agent gets seeded with a little bit of relevant information about the project, and if it can compile that means it can later partially test the code it is writing while it works.

"Figure out how to build this" is a shortcut for getting a coding agent primed for future work. If you look at the transcript you can see what it did: https://gistpreview.github.io/?ddabbff092bdd658e06d8a2e8f142...

  • Ran ls
    └ AGENTS.md
      CHANGELOG.md
        +13 lines
      scripts
      sdk
  
  • Ran cat README.md
    └ <p align="center"><code>npm i -g @openai/codex</code><br />or <code>brew
      install --cask codex</code></p>
  
        +105 lines
  
      This repository is licensed under the [Apache-2.0 License](LICENSE).
  
  • Ran rg -n "rust" -n docs/install.md
    └ 23:curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
      25:rustup component add rustfmt
      26:rustup component add clippy
  
  • Ran sed -n '1,200p' docs/install.md
    └ ## Install & build
  
        +36 lines
      cargo test
      ```
  
  • I’m gathering the build steps for the Rust tool, focusing on running cargo
    build inside the codex-rs directory. I'll check dependencies and confirm
    the instructions before actually building, aiming to provide a clear summary
    afterward.
  
  • Ran cargo build
    └     Updating git repository `https://github.com/nornagon/crossterm`
          Updating git repository `https://github.com/nornagon/ratatui`
        +825 lines
         Compiling codex-cli v0.0.0 (/Users/simon/Dropbox/dev/codex/codex-rs/cli)
          Finished `dev` profile [unoptimized + debuginfo] target(s) in 1m 27s
  
  • Ran ls target/debug | head
    └ apply_patch
      apply_patch.d
        +6 lines
      codex-execpolicy
That's a decent start at seeding the context with information relevant to making and then testing the modifications I'm about to ask for.


What useful context is in there? How to call “cargo build”? It already knows that.


The README files that it looked at, and the directory structure it explored.

It now knows what the project is, what dependencies it uses, how it's laid out and the set of binaries that it generates.

Even more importantly: it knows that the project can be built without errors. If it tries a build later and sees an error it will know that the error was caused by code it had modified.


You could just tell it to check out the README, but I suspect it would have done that anyway, or figured out the type of project and how it is structured as the first step of any other command you gave it, since without that it is impossible to add to or update the project.


For a Rust developer, neglecting their ability to debug cargo build issues puts their career at risk. For someone like that, letting AI handle it would be a really shortsighted move.

But Simon isn’t a Rust developer - he’s a motivated individual with a side project. He can now speedrun the part he’s not interested in. That doesn’t affect anyone else’s decisions, you can still choose to learn the details. Ability to skip it if you wish, is a huge win for everyone.


> He can now speedrun the part he’s not interested in.

The reductio that people tend to be concerned about is, what if someone is not interested in any aspect of software development, and just wants to earn money by doing it? The belief is that the consequences then start becoming more problematic.


Those people are their own worst enemies.

Some people will always look for ways to "cheat". I don't want to hold back everyone else just because a few people will harm themselves by using this stuff as a replacement for learning and developing themselves.


Do you genuinely believe that this only applies to "a few people"?

This new post gets at the issue: https://news.ycombinator.com/item?id=45868271


I don't understand the argument that post is making.

I agree that people using LLMs in a lazy way that has negative consequences - like posting slop on social media - is bad.

What's not clearly to me is the scale of the problem. Is it 1/100 people who do this, or is it more like 1/4?

Just a few people behaving badly on social media can be viewed by thousands or even millions more.

Does that mean we should discard the entire technology, or should we focus on teaching people how to use it more positively, or should we regulate its use?


>> He can now speedrun the part he’s not interested in

In this case it's more like slow-running. Building a Rust project is one command, and ChatGPT will tell you that command in 5 seconds.

Running an agent for that is 1000x less efficient.

At this point it's not optimizing or speeding things up, but running an agent for the sake of running an agent.


The best thing about having an agent figure this out is you don't even need to be at your computer while it works. I was cooking dinner.


You’re not properly accounting for the risk of getting blocked on one of these 5 second tasks. Do an expected value calculation and things look very different.

Across a day of doing these little “run one command” tasks, even getting blocked by one could waste an hour. That makes the expected value calculation of each single task tilt much more in favor of a hands off approach.

Secondly, you’re not valuing the ability to take yourself out of the loop - especially when the task to be done by AI isn’t on the critical path, so it doesn’t matter if it takes 5 minutes or 5 milliseconds. Let AI run a few short commands while you go do something else that’ll definitely take longer than the difference - maybe a code review - and you’ve doubled your parallelism.

These examples are situational and up to the individual to choose how they operate, and they don’t affect you or your decisions.


The most important thing is to have it successfully build the software, to prove to both me and itself that a clean compile is possible before making any further changes.


Suggestion: make a “check.sh” script that builds everything, lints everything, and runs all (fast) tests. Add a directive in the agent system prompt to call it before & after doing anything. If it fails, it will investigate why.


In situations like this it's better to ask the agent to write a short document about how to run the project. Then you read it and delete the useless parts. Then you ask the agent to follow that document and improve it until the software builds. By the final step, you have a README.md personalized for your needs.


Figuring out how to build a project in an unfamiliar language/build system is my least favourite activity, mainly because all the people who are familiar with those tools think it's "as simple as" and don't bother to write it down anywhere. I don't plan on learning every build system ever.


I did not know how to do X so I Y.

It would be interesting to know what kinds of responses humans offer across different values of Y such as:

1) looked on Stack Overflow
2) googled it
3) consulted the manual
4) asked an LLM
5) asked a friend

For each of these, does the learner somehow learn something more or better?

Is there some means of learning that doesn't degrade us as human beings according to those in the know?

I ask as someone who listens to audiobooks and answers yes when someone asks me if I've read the book. And that's hardly the extent of my transgressions.


At least if you're copy/pasting from stack overflow you presumably glanced at the change you are copying if only to ensure you select the correct text.


Good point. We also sometimes leave comments in code noting the thread we referenced.


Yeah because the code on stack overflow has a license.


That would be a great reason to include the link. Would have been good idea for me to think about that 5-10 years ago. I just did it because I thought it might be helpful and it's cheap. Woops!



You forgot to read the readme


I did! How could I forget that and RTFM? The code is self-documenting! Lol.


Ok but I'd argue Rust/Cargo shouldn't be an "unfamiliar language/build system" for most professional programmers these days. It's like a professional plumber being unfamiliar with solder. Like, yeah, you can do a lot without soldering, but imagine a pro just straight up not having a clue about solder.

It's worth learning how to do this stuff. Not just because you then know that particular build system, but because you get better at learning. Learning how to learn is super important. I haven't come across a new project that's taken me more than a few minutes to figure out how to build in years.


> Ok but I'd argue Rust/Cargo shouldn't be an "unfamiliar language/build system" for most professional programmers these days.

This isn't even close to true. The majority of programmers will be fine going their entire career without even knowing what Rust is, let alone how to build Rust projects.

A more accurate analogy would be a plumber not knowing how his wrench was manufactured.


Conversely: I’ve been a professional programmer for nearly 20 years and I’ve never touched rust/cargo - I’ve never needed to.

(Though in general I do agree with “it’s worth learning how to do this stuff.)


Rust ranks 16th on the current TIOBE (https://www.tiobe.com/tiobe-index/), behind assembly, PHP and R. It is still not remotely as popular (as "based on the number of skilled engineers world-wide, courses and third party vendors") as C or C++ (to say nothing of how dominant Python has become).

The supposed ubiquity of Rust is the result of a hype and/or drama bubble.


If you've never used Rust before, there's no obvious documentation on how to run an existing Rust project, and `cargo install` isn't on the "Getting Started" page. I could read the Cargo Book or check `--help`, I guess, but that can be surprisingly time-consuming as well; it might take 5-30 minutes of active searching to locate the information. If you can, try to put yourself in a beginner's mindset and think through your argument again.

Regarding your second point, I think people actually underutilise LLMs for simple tasks. Delegating these tasks frees up your problem-solving skills for challenges that truly need human insight. In this case, asking an LLM is arguably the smart choice: it's a common task in the training data, easy to verify, low-risk to run, and not central to what you were actually trying to learn.


You don’t need to cargo install anything. You just need cargo itself, which is linked on the main page. Once you have that, here’s an example google search that gives you all the info you need to run the project (hint: `cargo run`)

https://www.google.com/search?q=how+do+I+run+a+rust+project


Thanks for the Google link. I was just asking GPT-5-Pro "How to Google: 'How do I run a rust project'", and am still waiting for the answer... The point was that searching for an answer (wherever/however) is not necessary in some cases, like this one; asking the AI agent to find a solution can be sufficient and is totally OK. Engineers are allowed to delegate, and there is nothing wrong with this.


> In turn, this makes your own research or problem-solving skills less sharp.

Why would that be true? The average assistant is certainly typing more quickly than their boss, but most people would not find issue in that. It's different responsibilities. You free up time to research / problem-solve other things.

> No need to wait for 5-30 minutes until LLM figures this out.

I don't care if the LLM takes 15 additional minutes to figure it out, if it nets me a minute saved (we could certainly debate the ergonomics of the multitasking involved, but that's something every person who delegates work has to deal with, and it's not unique to working with LLMs in any way).


I got excited about LLM agents thinking it was just about "faster typing". A lot of us have dreamed of a day where we can just transfer what we have in mind directly into the computer, skipping the laborious manual keying step. But using an LLM is not that. It's not that at all.

Instead they let you type vague or ambiguous crap in and just essentially guess about the unclear bits. Hadn't quite thought through which algorithm to use? No worries, the LLM will just pick one. Hadn't considered an edge case? No worries, the LLM will just write 100 lines of code that no sane programmer would ever go through with before realising something isn't right.

I've made the mistake of being that senior who is way too eager to help juniors many times in my career. What happens is they never, ever learn for themselves. They never learn how to digest and work through a problem. They never learn from their mistakes. Because they can always just defer to me. LLMs are the worst thing to happen for these people because unlike a real person like me the LLM is never busy, never grumpy and nobody is taking notes of just how many times they're having to ask.

LLMs are really useful at generating boilerplate, but you have to ask yourself why you're spending your days writing boilerplate in the first place. The danger is it can very quickly become more than just boilerplate and before you know it you've forgotten how to think for yourself.


Sometimes avoiding boilerplate is out of scope. I’m currently using an LLM agent to write a Home Assistant integration. The LLM is happy to write boilerplate crap to interact with the terrible Home Assistant API without complaining about it. Sure, some of the code it writes is awful, and I can fix that. (The record was about 15 lines of code, including non-functional error handling, to compute the fixed number zero.)

Becoming proficient at banging out Home Assistant entities and their utterly ludicrous instantiation process has zero value for my career.


I see where you're coming from. But I often find that when I have some idea or challenge that I want to solve, I get bogged down in details (like how do I build that project)... before I even know if the idea I _wanted_ to solve is feasible.

It's not that I don't care about learning how to build Rust or think that it's too big of a challenge. It's just not the thing I was excited about right now, and it's not obvious ahead of time how sidetracked it will get me. I find that having an LLM just figure it out helps me to not lose momentum.


I have used Rust for over a decade (yeah, Rust is that old) and want to point out that that's not always the case, especially when FFI is involved. At some point, for example, any Rust crate with the `openssl` dependency required special care every time `cargo install` was run. Cargo itself is super nice; other tools, still not so much.


I would have done the same thing. I know how to build software in a dozen or more languages. I've done it manually, from scratch, in all of them. I don't know Rust. I have no immediate plan to learn Rust. I vaguely know that Cargo is something in the Rust toolbox. I don't have it installed. I don't particularly want to learn anything about it. It's a whole lot easier for me to tell the LLM to figure that out.

I might learn Rust some day. At the moment, I don't need the mental clutter.


Well, fyi because it is really simple: if you have rust installed, you have cargo installed too. And to run a project you type “cargo run” from the base directory. That is all.


You get a build error because the rust version you have installed is incompatible with the codebase. Now you have to install rustup and...


I don't doubt that the process is ultimately fairly easy, but it always looks easier from inside the bubble. You have to know that Cargo is part of Rust; what the best way to install it is on your system; what tooling you should use if you don't want headaches later; what to do if you don't want to or can't install system-level packages; what minimum versions of Rust and Cargo are assumed by application developers this week; what additional dependencies are required; etc.

It's my understanding that building Rust applications still requires a C toolchain, and packages are still going to be dependent on things like having the openssl dev headers/libraries installed. That's fine, that's normal for building software, but it's not as trivial as "just git-clone this Rust source repo and run one command and everything will work".

I'm certain I could get up and running quickly. I'm also certain I'd have to install a bunch of stuff and iterate past multiple roadblocks before I was actually able to build a Rust application. And finally I'm certain I could get Claude or Codex to do it all a lot faster than if I muddled through it myself for half an hour.


Execute the one-liner here: https://rustup.rs/

Then cd dir && cargo run

I get what you’re saying, but rust has really set the bar (lowered the bar?) for making it easy, so it’s a bad example to pick on.


Funny you say that, because I have the opposite opinion.

It is easy for any of us to quickly bootstrap a new project in whatever language. But this takes a cognitive toll, and adds friction to bring our visions online.

Recently, I went "blind" for a couple of days. My vision was so damaged I could only see blurs. The circumstances of this blindness are irrelevant, but it dawned on me that if I were blind, I could no longer code as I do.

This realization led me to purchase a Max subscription to Claude Code and rely more on LLMs for building, not less.

It was much more effective than I thought it would be. In my blurred blindness, I saw blobs of a beautiful user interface form from the skeleton of my Rust backend and Vue3 frontend. It took my complex backend and my frontend scaffolding to another level. I could recognize it distinctly through the blur. And it did this in minutes or hours instead of days.

As my vision returned, I began analyzing what happened and conducting experiments. My attitude changed completely. Instead of doing things myself, directly, I set out to make the LLM do it, even if it took more time.

It is painful at first. It makes very stupid mistakes that make an experienced engineer snarl at it, "I can do better myself". But my blindness gave me new sight. If I were blind, I couldn't do it myself. I would need help.

Instead of letting that ego take over, I started patiently understanding how the LLM best operates. I discovered mainly it needs context and specific instructions.

I experimented with a DSL I made for defining LLM instructions that are more suited for it, and I cannot describe the magic that started unfolding.

Now, I am writing a massive library of modular instructions for LLMs, and launching them against various situations. They will run for hours uninterrupted and end up implementing full code bases, with complete test suites, domain separation, and security baked in.

Reviewing their code, it looks better than 90% of what I see people producing. Clear separation of concerns, minimal code duplication, distinct interface definitions, and so much more.

So now, I have been making it more and more autonomous. It doesn't matter if I could bootstrap a project in 30 seconds. If I spend a few hours perfecting the instructions to the LLM, I can bootstrap ANY project for ANY LANGUAGE, forever.

And the great thing? I already know the pattern works. At this point, it is foolish for me to do anything other than this.


Just as a quick datapoint here in case people get worried; yes, it is absolutely possible to program as a blind person, even without language models. Obviously you won't be using your eyes for it, but we have tried and tested tools that help and work. And at the end of the day, someone's going to have to review the code that gets written, so either way, you're not going to get around learning those tools.

Source: Am a blind person coding for many years before language models existed.


Thank you for sharing your experience. It provides me a bit of comfort to know it's possible for me to keep coding in the event of vision loss, and I'm glad tools exist for people that are blind.

A part of me wants to start using the available tools just to expand my modalities of interfacing with technology. If you have the time, any recommendations? What do you use?


The DSL sounds interesting, if you talk about it anywhere I'd definitely be interested in reading more!


I'll look forward to sharing my findings.


> In turn, this makes your own research or problem-solving skills less sharp.

That's one way to think about it, but on the other hand, where's the "skill" in knowing a particular CLI invocation for a particular tool or installation task? Next year there will be a Better Way to Do It. (Witness how many trendy package installers / venv managers the Python community has gone through.)

An LLM's job is to translate what I want to do into instructions or commands that actually do it. Real skill involves defining and directing the process; the implementation details are just temporary artifacts. Memorized command lines, mastery of specific tools, and conflation of rote procedures with "skills" are what keeps languages like C around for 50 years, long after the point where they begin to impede progress.


I've been thinking about this a lot recently. I've been using Git for about 20 years, but I've never considered myself a Git expert because I have to look up how to do things like a git bisect or a history rewrite beyond just the first "git commit --amend"

But now... I use a ton of advanced Git features several times a week, because just knowing that it's possible to do something is enough for me to tell Codex or Claude Code to do that thing.

So maybe Git mastery now is more about concepts? You need to understand the DAG and roughly what's in the .git folder and origins and branches and tags and commits and so forth, but you don't need to remember the syntax.


Exactly, and now imagine that somebody introduces a tool even better than Git. Normally that tool would have major adoption problems because it would have to dislodge the incumbent standard that everybody's familiar with, and that everybody can at least tolerate.

But if you aren't even issuing commands directly to Git, suddenly it starts to look like there is room for improvement without the pain of learning a new tool or a new paradigm. That's a bigger deal than I think most people appreciate.


True, a bunch of "regards" in the making. Why do I need you to do basic-ass shit like this? No wonder we see mass layoffs.


Leonid Boguslavsky started doing triathlons at 62 and has had two femoral neck fractures because of it. He's now 74. I think he also has a knee prosthesis now.

Full-distance triathlon has nothing to do with health.


It's very rare to see 9K-line PRs from humans. They usually take weeks of work if it's a serious change. No one wants to spend that amount of time just to get rejected. You split the work up and send it piece by piece while discussing the changes.

But if it takes 1 hour with AI, you just send it right away...


That's the problem. I often have to guide LLMs 2-4 times to properly write 150-300 LOC changes, because I see how the code can be simplified or improved.

There is no way that 9,000 lines of code are decent. It's also very hard to review them and find the bad spots. Why spend your time in the first place? It probably took one hour for a person to generate it, but it will take ten to review it and point out the (probably hundreds of) problems.

Without AI, no one would submit 9000 lines, because that's tens of hours of work which you usually split into logical parts.


We spent 3 days trying to properly integrate their tunnels into our internal network. It took us 3 hours to integrate Tailscale.

Tunnels are poorly documented.


>Tunnels are poorly documented.

I'd tend to agree with that, but I was able to find some YouTube videos of people setting them up. It was still a bit of a challenge, though, because they have moved the menus all around in the last few months, so even the most recent videos I could find were pointing to locations that didn't exist, and I had to go hunting for them.

I would have preferred to just use Tailscale for this, but we are using Headscale and want to make a service available to our sister company, which doesn't have e-mails in our Google Workspace (where we have the OIDC for auth), so they can't be part of our tailnet without us buying them logins or setting up accounts in Keycloak or similar.


> they can't be part of our tailnet without buying them logins

I'm pretty sure you can use Cloud Identity Free accounts to do this. I've done something similar with OIDC and it didn't cost anything.


I've used bisect a few times in my life. Most of the time, I already know which files or functions might have introduced a bug.

Looking at the history of specific files or functions usually gives a quick idea. In modern Git, you can search the history of a specific function.

    >> git log -L :func_name:path/to/a/file.c
You need to have a proper .gitattributes file, though.


Alternatively if you do not have that set up, `git log -S` helps you find commits whose diff contain a specific string.


Can you elaborate on the required .gitattributes file? Where can I find more information on the necessary content? Sounds super useful!


You need to specify the diff driver so that Git can correctly identify and parse function bodies:

    *.py diff=python


Thanks!


I use git bisect literally every day. We are clearly different people :)


I don’t use it for myself often, but I use it fairly often when someone has to escalate a problem to me. And how you work when the shit hits the fan says a lot about you overall, IMO.


Basically any time I'm like "huh, that's weird," even if it is not a bug, I bisect and see when that behavior was introduced. Because (1) this is trivial and no work to do (`git bisect run` is completely autonomous), and (2) it gets me to the commit that introduces the change, which has all the context that might tell me why it is acting that way.

Nothing annoys me more than a codebase with broken commits that break git bisect.


Ah, in that case the JetBrains diff tool lets you annotate inside the diff window and I can usually walk back to where this possible off by one error was first authored that way.

It probably would be slightly faster to jump to bisect. But it’s not in my muscle memory.


I'm not sure what you mean by "annotate inside the diff window"?

If you mean see what commit added code, that's what git-blame is for.

Bisect is for when you don't know which code, just the behavior.


Git blame doesn’t show you why line 22 has a bug in it. It only shows you who touched it last. And that’s if nobody fucked up a merge.

A single line of code can have half a dozen authors. You have to pick back through the history and keep running git blame until you determine who put the bug in that line, and why.

If you show the side by side diff for a commit in JetBrains, you can click show annotations and it’ll show you the blame for the before. And then you can keep going back. For a large file that saves you having to go through all the other commits that were done in completely other parts of the file. Which can be a lot for large files.


That sounds more complicated than git bisect. When I bisect I have a short test script that confirms the bug. Usually, I already have this because part of the bug report/identification. I then run "git bisect run path/to/bug.sh". That's it -- it will output which commit caused the change. Those occasional times I need to confirm the presence of actual text, I use sh -c "git grep ..." as the test command.


I use this often, but it is sadly weak when used on C++ code that includes overloaded methods/functions:

  /* snip */

  void
  Object::do_the_thing (int)
  {
  }

  void
  Object::do_the_thing (float)
  {
  }

  /* snip*/
AFAICT, `git log -L` can never be told to track the history of the second overload.

