Hacker News | lowbloodsugar's comments

New here? Ideas are fucking cheap. “I have an amazing idea!” Let me guess: you just need a coder, any coder will do, they’re all the same, to turn your amazing idea into a product, and you’ll give me a couple of bucks to do it. lol.

That’s entirely on you. You can take the time to understand it before moving on to the next task. I say this with sympathy and understanding.

Worse: it’s the confident prolific idiot, the most dangerous kind.

Humans aren’t any better. That’s why we have OSHA etc. I think you’re hoping for a formal-logic-based AI, and I’ll wager no such thing will ever exist - and if it did, it would try to kill us all.

> Humans aren’t any better

We're different.

People have fairly consistent faults. LLMs are nondeterministic even in terms of how they fail. A high value human resource can be counted on to deliver. That, imho, is in fact one of the primary roles of good management: putting the right person in the appropriate position.

Process engineering has worked to date because both the human and mechanical components of a system fail in predictable ways and we can try to remedy that. This is the golden bug of the current crop of "AI".


> A high value human resource can be counted on to deliver.

Anyone who has encountered politics, psychopaths and narcissists knows that this isn’t always true.


Normally, people don't suddenly go insane, snap and start deliberately deleting things in production. Sure, it happens, but very, very rarely.

People make bad decisions all the time. Insanity is not required. My remark was pointing out the larger failure mode of people acting contrary to the good of the team for personal gain, e.g. creating a problem and blaming someone else to reduce their chances of competing for a promotion. But to your point, an SDE doesn’t need to be insane to bypass a two-person PR review and force a change into production. They just need to be panicked, or overconfident, or overworked.

> They just need to be panicked, or overconfident, or overworked.

One of the best things about digital computers, compared to humans, was that they can't be the first or the third thing you mentioned; unfortunately, they absolutely are the second ("the machine does exactly what you told it to do, not what you want it to do"), and at inhuman speeds. Presumably, AI would fix that second bullet point (would need to, actually; Nick Bostrom makes a fairly reasonable argument for that in his "Superintelligence"), and then everything will be peachy.

Instead, we have people on the internet arguing that it's not a problem, since people too have this same problem. Which is a problem. But not a problem. Ugh.


Computers absolutely can be overworked. Plenty of outages are caused by system overloads. Or a system deletes a file because it believes it is no longer in use, when really some queue was full. I’m not arguing that it’s not a problem because humans have the same problem. Part of my job is making sure humans can’t fuck it up either. I’m saying “assume the worst” and make sure the processes catch both human and AI mistakes.

Also, I think Nick makes the same point as me: AI will attempt to kill us.


Formal logic AI systems have existed and were popular in the 1980s. One of the problems is that they don't work - in the real world there are no firm facts, everything is squishy, and when you try to build a large system you end up making tons of exceptions for special cases until it becomes completely untenable.

Non-deterministic systems that work probabilistically are just superior in function to that, even if it makes us all deeply uncomfortable.


I don't know what definition of AI you're using, but plenty of ML algorithms operate deterministically, let alone most other logic programmed into a computer. I don't see how your statement can be right given that these other software systems also operate in the real world.

ML run on a GPU using matrix multiplies isn't deterministic unless you go to great pains to lock things down, at the expense of performance.
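A minimal illustration of why this happens, in plain Python (no GPU needed): floating-point addition isn't associative, so any parallel reduction whose summation order varies between runs, as it can with GPU atomics or thread scheduling, can yield different results from identical inputs.

```python
# Floating-point addition is not associative. A parallel sum whose
# reduction order differs between runs (e.g. due to GPU thread
# scheduling or atomic adds) can therefore produce different
# results from the exact same inputs.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # one possible reduction order
right = a + (b + c)   # another possible reduction order
print(left == right)  # False: 0.6000000000000001 vs 0.6
```

Locking the reduction order down restores determinism, which is exactly the performance trade-off mentioned above.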

Actually they do very well at medical diagnosis, but the doctors' union banned them.

Yeah, I remember when the lazy bastards started writing programs using compilers instead of learning assembly language. Now I don’t have a single colleague who can write assembly. There’s whole generations now who can’t code assembly. Most don’t even know what a register is. Hope Zig holds against this latest attempt to make everyone stupid.

To add to the other commenters, loads of people don’t know assembly, which speaks to the quality of the average developer. The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.

I'd be very surprised if the "average" developer across the board was anything more than a JavaScript/TypeScript-only developer. I have no expectation, or really even hope, that the average developer I work with has ever written a line of assembly.

>The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.

That's if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make much difference, because it's hard to optimize code in those languages for how the CPU and memory work.


Knowing assembly in this day and age is the result of being curious and wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.

The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand; one who could write the whole thing from scratch, so they have enough judgement to review generated code.

Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.


> wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.

It's possible to know in general terms, how computers work, and what assembly is without "knowing assembly" in the sense of being familiar with using/debugging it as a programming language.


Knowing assembly doesn’t mean you would spend your time writing assembly (aka being familiar with opcodes and architecture optimizations). But in the process, you get familiar with the workings of the computer hardware and the OS that sits on top of it. That is always useful knowledge, especially when dealing with binary formats and protocols, or FFI.

> But in the process..

Then it's sufficient to know assembly, but not necessary.

This is compatible with "[developers] that still understand assembly to this day tend to be better developers", but not with "[on developers who] don’t know assembly, which speaks to [their] quality".


Your analogy falls apart because the "lazy bastards" still knew how to program and understood the code they were working on.

Vibe-coders often don't read, let alone understand, the code they send for PRs.


I don't think most JavaScript devs know how to read C code, let alone assembly, so I think the comparison is apt. Is it not?

The JavaScript developers are checking in JavaScript code that they ostensibly understand. That is not the same as prompting an LLM to generate Zig that they don't understand, and expecting someone to merge it.

ah, i see what you're saying. fair point! though the argument was that LLMs essentially are a yet higher level programming language (or, rather, let you write in a higher level language).

They do let you write in a higher-level language, but it's not really analogous to a higher-level programming language. The ambiguity and lack of determinism makes prompting fundamentally different from using a high level programming language.

Generating AI code/PR is not the same as using compilers because of at least two things:

- the scale of how much and how fast you can generate code with AI vs how fast you can write code for a compiler

- the mental model of what is being generated and how much the contributor understands and owns the generated code


Using an LLM isn't analogous to using a higher level language.

That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic. That may be a problem but it’s still a higher level language, just a much higher level language than anything before.

I assume you're some sort of programmer and I genuinely wonder how in the world can someone in good faith downplay non-determinism and ambiguity when talking about a programming language.

High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.
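The distinction can be sketched in a few lines. This is a toy illustration, not real compiler or model internals: `compile_` and `llm` are hypothetical stand-ins, with string transforms standing in for code generation. A compiler is conceptually a pure function from source to output, while an LLM with nonzero temperature samples from a distribution over outputs.

```python
import random

def compile_(source: str) -> str:
    # A compiler is (conceptually) a pure function: the same
    # source always maps to the same output.
    return source.upper()  # stand-in for actual code generation

def llm(prompt: str, temperature: float, rng: random.Random) -> str:
    # A sampling LLM draws from a distribution over outputs, so
    # the same prompt can yield different programs on each run.
    candidates = [prompt.upper(), prompt.lower(), prompt.title()]
    if temperature == 0.0:
        return candidates[0]       # greedy decoding: repeatable
    return rng.choice(candidates)  # nonzero temperature: not

# The compiler gives the same answer every time; the sampler's
# answer depends on the random state it happens to be in.
print(compile_("Hello") == compile_("Hello"))  # True
```

Even greedy (temperature-zero) decoding only pins down the output for an exact prompt; it does nothing about the ambiguity of the natural-language input itself.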


> That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic.

So it is not, by your own admission, "exactly, literally the same".


Take it gently, the poor thing doesn't understand the difference between code and talking about code.

The main difference is that the input to an LLM is in an ambiguous language.

A programming language is allowed to be ambiguous, I don’t know of a definition that excludes that!

All programming languages I know of provide at least some guarantees about the program’s behavior.

The language spec may be, but an implementation is never ambiguous. When you encounter undefined behavior in the spec, that's when you look at your compiler/interpreter docs.

So is JavaScript haha.

So by your logic all the PMs, managers and customers are programmers, right? After all, there’s a human compiler that takes their input and produces a program?

They are programmers when they write a prompt and get runnable code as a result, yes. But not if they ask a human to write the code: with an intermediate, manual step between the text and the running code, you don't have an automated process, and hence it's no longer even an application, let alone a “compiler”.

Why does it matter if a human or a machine is responsible for turning the prompt into code?

If there's a black box which I can send C code into one side of and get faithful machine code out the other, I'd call that box a "compiler". I wouldn't rename it if I later find out that there are little elves inside doing the translation.


Sorry but that’s a childish take.

Would you mind explaining why?

There’s a big difference between (mostly) deterministic compiler and non-deterministic LLMs.

Computers have been better at this since the 80s. But the doctors have a really good union, and they’re smart enough not to call it a “union” so it sounds like it’s about standards and ethics.

They should both be in sandboxes. Just different “sized” boxes.

The problem is that I don’t send them anything. So it’s “we can use whatever of yours that the application we wrote sends to us”.

Same can be said of many humans.

Say what you will about humans, but they are absolutely capable of working autonomously. That’s what jobs are.

Uh uh. That's why there are no jobs called “supervisor” or “manager”, right?

Because there’s nothing else. We don’t make anything. We produce value. All that’s left is to harvest the wealth of boomers. America is trying to stay alive by drinking its own piss.

Bruh what are you even talking about? Trying to save the lives of old people is not about harvesting wealth. What the hell?

Not the OP but I get it. We don’t produce anymore.

What we do is produce ideas, then sell the idea to a few wealthy groups, which has led to a very distorted economy.

It’s also no secret that “wealth extraction” has been an ongoing best practice for the past decade or two by those in this circle, and the financiers are eyeing ways to get to the retirement accounts legally.

We already see this with cryptocurrency “normalizing” as investments and SpaceX bundling itself with questionable AI companies pre-IPO (index fund manipulation).


Trying to save the lives of old people is not what the healthcare industry does.
