
It’s kinda funny that the “not …, but …” + em dash slop signifiers translate directly to the Chinese “不是……而是……” and the double-width em dash

It’s not like people are piling into AI at the expense of other jobs; tech hiring and wages in general seem down relative to a couple of years ago. It’s hard to see the link between either effect and AI.

Also weird that Dutch disease wasn’t mentioned at all; it actually seems more relevant.


But what about an ML person roped into writing an AI-assisted blog post about security?

Driving a massive truck in the city is stupid too and most short flights should be replaced with high speed rail. And AI wastes a monumental amount of resources.

They all have optional IDE integration, e.g. Claude knows the active VS Code tab and highlighted lines.


Is that better than Cursor? Same? Just different?


All I can say is when I switched from Cursor to Claude it took me less than 24 hours to realise I wouldn’t go back. The extra UI Cursor slaps on to VS Code is just bloat, which I found quite buggy (might be better now though), and the output was nowhere near as good. Maybe things have improved since I switched but Claude CLI with VS Code is giving me no reasons to want to try anything else. Cursor seemed like a promising and impressive toy, Claude CLI is just a great product that’s delivering value for me every day.


VS Code has agents built in now; have you used that UI?


That particular part is the same, roughly. The bigger issue is just that CC's a better agent than Cursor, last I checked.

There's even an official Anthropic VS Code extension to run CC in VS Code. The biggest advantage is being able to use VS Code's diff views, which I like more than in the terminal. But the VS Code CC extension doesn't support all the latest features of the terminal CC, so I'm usually still in the terminal.


TDD is testing in production in disguise. After all, bugs are unexpected and you can’t write tests for a bug you don’t expect. Then the bug crops up in production and you update the test suite.


TDD has always been about two things for me: being able to move forward faster, because I have something easy to execute that compares the code against the known wanted state, and preventing unwanted regressions in the future. I'm not sure I've ever thought of unit testing as "preventing potential future bugs"; mostly up-front design prevents those, or I'd use property testing, but neither of those is inside the whole "write test, then write code" flow.


The intended workflow of TDD is to write a set of tests before some code. The only reason that makes sense conceptually is to prevent possible future bugs from going undetected.

Put another way, if your TDD tests always pass then there’s no point in writing them, and there are no known bugs before you have any code. So discovering future bugs that didn’t exist when you were writing those tests is the point.


But with tests you can only prevent those future bugs you managed to think of. Anything you didn't anticipate will not be covered by tests.

TDD is useful to build some initial "guard rails" when writing new code and it's useful to prevent regressions (by adding more guard rails when you notice the program went off the road). You can't just add "all the guard rails ever needed" in advance.


Some classes of bugs need specific tests to find, but I can catch a spelling error without specifically looking for a spelling error.

Similarly, bugs often crop up because of interactions which aren’t obvious at the time. Thus the reason a test is failing can be wildly different from the intended use case of the test. Perhaps the test failed because the continuous integration environment has some bad RAM; you’ll need to investigate to discover why a test fails.


Honestly, the way I use testing these days is as a more persistent version of a Jupyter notebook. Some piece of code is just complex enough that I don't fully understand it, so hopefully the test framework in the language of choice will make it easy enough to isolate it and write a bunch of quick-to-execute explorations of things I expect and do not expect about it.
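
For instance, a throwaway pytest file can serve as that persistent scratchpad (a hypothetical sketch; `mylib` and `tricky_parse` are made-up names):

    # exploration_test.py -- a scratchpad of assertions about code I don't
    # fully understand yet; run with `pytest exploration_test.py -v`.
    from mylib import tricky_parse  # hypothetical function under study

    def test_empty_input():
        # I *expect* this to return an empty list, but I'm not certain.
        assert tricky_parse("") == []

    def test_whitespace_is_stripped():
        # Pinning down behaviour I observed once in a REPL session.
        assert tricky_parse("  a, b ") == ["a", "b"]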


I don’t really understand how to write tests before the code… When I write code, the hard part is writing the code which establishes the language to solve the problem in, which is the same language the tests will be written in. Also, once I have written the code I have a much better understanding of the problem, and I am in a way better position to write the correct tests.


You write the requirements, you write the spec, etc. before you write the code.

You then determine what are the inputs / outputs that you're taking for each function / method / class / etc.

You also determine what these functions / methods / classes / etc. compute within their blocks.

Now you have that on paper and have it planned out, so you write tests first for valid / invalid values, edge cases, etc.

There are workflows that work for this, but nowadays I automate a lot of test creation. It's a lot easier to hack a few iterations first, play with it, and then write some tests once I have my desired behaviour. Gradually you just write tests first; you may even keep a repo somewhere for tests you might use again for common patterns.
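
As a minimal illustration of that spec-first ordering (all names hypothetical): given a spec like "orders over $100 get 10% off; negative totals are invalid", the tests can exist before `apply_discount` does:

    import pytest

    # Written from the spec alone: apply_discount doesn't exist yet,
    # so these tests start red and drive the implementation.
    from pricing import apply_discount  # hypothetical module

    def test_orders_over_threshold_get_ten_percent_off():
        assert apply_discount(200.0) == 180.0

    def test_small_orders_pay_full_price():
        assert apply_discount(50.0) == 50.0

    def test_negative_totals_are_rejected():
        with pytest.raises(ValueError):
            apply_discount(-1.0)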


I want to have a CUDA-based shader that decays the colours of a deformable mesh, based on texture data fetched via Perlin noise; it also has to have a “wow” look per designer requirements.

Quite curious about the TDD approach to that, especially taking into account the religious “no code without broken tests” mantra.


Break it down into its independent steps; you're not trying to write an integration test out of the gate. Color decay code, Perlin noise, etc. Get all the sub-parts of the problem mapped out and tested.

Once you've got unit tests and built what you think you need, write integration/e2e tests and try to get those green as well. As you integrate you'll probably also run into more bugs, make sure you add regression tests for those and fix them as you're working.
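
For instance, the color decay rule can be pulled out as a pure function and unit tested on the CPU before any CUDA enters the picture (a sketch; all names are made up):

    def decay_color(rgb, decay_rate, dt):
        """Pure color-decay rule: testable without touching the GPU."""
        return tuple(max(0.0, c * (1.0 - decay_rate * dt)) for c in rgb)

    def test_decay_reduces_each_channel():
        assert decay_color((1.0, 0.5, 0.2), decay_rate=0.5, dt=1.0) == (0.5, 0.25, 0.1)

    def test_decay_never_goes_negative():
        faded = decay_color((0.1, 0.1, 0.1), decay_rate=2.0, dt=1.0)
        assert all(c >= 0.0 for c in faded)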


You've still got to figure out the TDD for the UX “wow” designer part.


TDD is terrible for anything where the hard part is the subjective look and feel.


1. Write a test that generates an artefact (e.g. a picture) where you can check look and feel (red).

2. Write code that makes it look right, running the test and checking that picture periodically. When it looks right, lock in the artefact which should now be checked against the actual picture (green, if it matches).

3. Refactor.
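
A minimal pytest sketch of that loop (the renderer and file paths are hypothetical):

    from pathlib import Path

    from myrenderer import render_scene  # hypothetical code under test

    GOLDEN = Path("tests/golden/scene.png")

    def test_scene_matches_approved_artefact():
        actual = render_scene()  # assumed to return PNG bytes
        if not GOLDEN.exists():
            # Red phase: dump the artefact for a human to eyeball; once it
            # looks right, move it to tests/golden/ to lock it in.
            out = Path("tests/output/scene.png")
            out.parent.mkdir(parents=True, exist_ok=True)
            out.write_bytes(actual)
            assert False, "inspect tests/output/scene.png, then approve it"
        assert actual == GOLDEN.read_bytes()  # green once locked in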

The only criticism I've heard of this is that it doesn't fit some people's conceptions of what they think TDD "ought to be" (i.e. some bullshit with a low-level unit test).


You can even do this with an LLM as a judge. Feed screenshots into an LLM judge panel and get them to rank the design 1-10. Give the panel a few different perspectives/models to get a good distribution of ranks, and establish a rank floor for passing the test.
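
A rough sketch of what that could look like (`ask_judge` is a hypothetical stand-in for whatever model API you wire up):

    import statistics

    PERSPECTIVES = ["visual hierarchy", "accessibility", "brand consistency"]
    RANK_FLOOR = 7.0  # minimum median score required to pass

    def ask_judge(screenshot_png: bytes, perspective: str) -> float:
        """Prompt an LLM to rate the screenshot 1-10 from one perspective.
        Hypothetical stub; plug in your model provider of choice."""
        raise NotImplementedError

    def test_design_clears_the_judge_panel():
        shot = open("tests/output/homepage.png", "rb").read()
        scores = [ask_judge(shot, p) for p in PERSPECTIVES]
        assert statistics.median(scores) >= RANK_FLOOR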


Parent mentioned "subjective look and feel"; LLMs are absolutely trash at that and have no subjective taste. You'll get the blandest designs out of LLMs, which makes sense considering how they were created and trained.


LLMs can get you to about a 7.5-8/10 just by iterating on their own. The main thing you have to do is wireframe the layout and give the agent a design that you think is good to target.


Again, they have literally zero artistic vision, and no, you cannot get an LLM to create a 7.5-out-of-10 web design or anything else artistic, unless you too lack the faculties to properly judge what actually works and looks good.


You can get an AI to produce a 10/10 design trivially by taking an existing 10/10 design and introducing variation along axes that are orthogonal to user experience.

You are right that most people wouldn't know what 10/10 design looks/behaves like. That's the real bottleneck: people can't prompt for what they don't understand.


Yeah, obviously if you're talking about copying/cloning, but that's not what I thought the context here was, I thought we were talking about LLMs themselves being able to create something that would look and feel good for a human, without just "Copy this design from here".


That only works for the simplest minimally interactive examples.

It is also so monumentally brittle that if you do this for interactive software, you will drive yourself nuts trying.


TDD fits better when you use a bottom up style of coding.

For a simple example, FizzBuzz as a loop with some if statements inside is not so easy to test. Instead, break it in half so you have a function that does the fiddly bits and a loop that just contains “output += MakeFizzBuzzLineForNumber(X);”. Now it’s easy to come up with tests for likely mistakes, and conceptually you’re working with two simpler problems with clear boundaries between them.
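
In Python, that split might look like this (a minimal sketch):

    def make_fizzbuzz_line(n: int) -> str:
        # The fiddly bit, isolated so it's trivially testable.
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    def fizzbuzz(upto: int) -> str:
        # The loop just glues lines together; little here to get wrong.
        return "\n".join(make_fizzbuzz_line(i) for i in range(1, upto + 1))

    def test_likely_mistakes():
        assert make_fizzbuzz_line(15) == "FizzBuzz"  # catches ordering bugs
        assert make_fizzbuzz_line(3) == "Fizz"
        assert make_fizzbuzz_line(5) == "Buzz"
        assert make_fizzbuzz_line(7) == "7"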

In a slightly different context, you might have a function that decides which kind of account to create based on some criteria, which then returns the account type rather than creating the account. That function’s logic is then testable by passing in some parameters and looking at the type of account returned, without actually creating any accounts (see the sketch below). Getting good at this requires looking at programs in a more abstract way, but a secondary benefit is rather easy-to-maintain code at the cost of a little bookkeeping. Just don’t go overboard: the value is in breaking out bits that are likely to contain bugs at some point, whereas abstraction for abstraction’s sake is just wasted effort.
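
Sketched out with hypothetical criteria, the decision function returns a type instead of performing the side effect:

    from enum import Enum

    class AccountType(Enum):
        BASIC = "basic"
        PREMIUM = "premium"

    def choose_account_type(age: int, annual_spend: float) -> AccountType:
        """Pure decision logic: no account is actually created here."""
        if age >= 18 and annual_spend > 1000:
            return AccountType.PREMIUM
        return AccountType.BASIC

    def test_high_spender_gets_premium():
        assert choose_account_type(30, 5000.0) is AccountType.PREMIUM

    def test_minor_never_gets_premium():
        assert choose_account_type(16, 5000.0) is AccountType.BASIC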


That's great for rote work, simple CRUD, and other things where you already know how the code should work so you can write a test first. Not all programming works well that way. I often have a goal I want to achieve, but no clue exactly how to get there at first. It takes quite a lot of experimentation, iteration and refinement before I have anything worth testing - and I've been programming 40+ years, so it's not because I don't know what I'm doing.


Not every approach works for every problem; still, we’re all writing a lot of straightforward code over our careers. I also find longer-term projects eventually favor TDD-style coding, as unknown unknowns get filled in over time.

Your edge case depends on the kind of experimentation you’re doing. I sometimes treat CSS as a kind of black magic and just look for the right incantation that happens to work across a bunch of browsers. It’s not efficient, but I’m OK punting because I don’t have the time to become an expert on everything.

On the other hand, when looking for an efficient algorithm or optimization, I’m likely to know what kind of results I’m looking for at some stage before creating the relevant code. In such cases tests help clarify what exactly the mysterious code needs to do, so that hours or weeks later, when inspiration hits, I haven’t forgotten any relevant details. I might have gone in a wildly different direction, but as long as I consider why each test was made before deleting it, the process of drilling down into the details has value.


I don't want to insult you, but I had to re-program myself in order to accept TDD and newer processes and there are a lot of systems out there that weren't written with testability in mind and are very difficult to deal with as a result. You are describing a prototype-until-you-reach-done type of approach, which is how we ended up with so much untestable code. My take is that you do a PoC, then throw it out and write the real application. "Build one to throw away" as Brooks said back in 1975.

I get where you're coming from, because I'm about a decade behind you, but resisting change is not a good look. I feel the same way about all this vibe coding and junk; I don't really think it's a good idea, but there it is. Get used to being wrong about everything.


>but resisting change is not a good look

Your condescending attitude is not a good look. You don't know me at all.


It's a matter of practice. The major problem is that business folks don't even know how to produce a testable spec; they just give you some vague idea of what they want, and you're supposed to produce a PoC and show it to them so they can refine their idea. If you go and produce a bunch of tests based on what they asked for, but no working code, you're getting fired. The whole process is on its head because we don't have solid engineering minds in most roles; we have people with liberal arts degrees faking it until they make it.

There were a few places I worked where TDD actually succeeded, because the project was fairly well baked and the requirements that came in could be understood. That was the exception, not the rule.


I am not really sure TDD is often compatible with modern agile development. It lends itself better to a more waterfall style, or to clearly defined systems.

If you can fully design what your system does before starting, it is more reasonable. And often that means going down to the level of inputs and states. Think of something like control systems for, say, mobile networks or planes or factory control: you could design the whole operation and all the states that should or could happen before a single line of code.


TDD operates at a vastly smaller scale. You don’t write every single test for the entire project before writing a single line of code.

Write some tests for a non-trivial function before creating the function, and the entire cycle might take as little as 20 minutes.


There is no relationship between agile/waterfall and TDD. Same as there is no relationship to pair programming and agile/waterfall, either.


> The intended workflow of TDD is to write a set of tests before some code. The only reason that makes sense conceptually is to prevent possible future bugs from going undetected.

Again, I don't do that for correctness; I do it because it's faster than not having something to work against that you can run with one command that tells you "Yup, you did the thing!" or "Nope, not there yet." When I don't do TDD, I'm slower, because I have to manually verify things, and sometimes there are regressions.

Catching these things and automating the process is what makes (for me) TDD worth it.

> Put another way if your TDD always pass then there’s no point in writing them

Uuh, no one said this?

I'm not sure where people got the idea that TDD is this very strict "one way and one way only" thing; the core idea is that your work gets easier to do. If it doesn't, then you're doing it wrong, probably by following the rules too tightly.

We don't have to be so dogmatic about any of the methodologies out there; everything has tradeoffs, so choose wisely.


>> After all, bugs are unexpected and you can’t write tests for a bug you don’t expect.

Ironically, AI can. In my experience it is extremely good at thinking about edge cases and writing tests to defend against them.


LLMs are notoriously bad at reflecting on how they work and I feel like humans are probably in the same boat


Wow, dragonfly terminated relatively quickly for Claude but sent ChatGPT into an infinite loop that was even worse than seahorse


Yes it turns out the article’s conclusion is in fact contained in the conclusion paragraph


To be fair, with JS's initialisation rules it does feel like it could have been anywhere.


Probably the only overlap between AI slop and actual journalistic writing is the obsession with em-dashes


And hallucinations.

