Hacker News | bwfan123's comments

We've come back full-circle to precise and "narrow interfaces".

Long story short: it is great when humans interact with LLMs on imprecise queries, because we can ascribe meaning to the LLM's output. But for precise queries, the human (or the LLM) needs to speak a narrow interface to another machine.

Precision requires formalism, since what we mean by "precise" involves symbolism and operational definitions. The genius of the human brain (not yet captured in LLMs) lies in the insight and understanding of what it means to precisely model a world via symbolism - ie, the place where symbolism originates. As an example, humans operationally and precisely model the shared experience of "space" using the symbolism and theory of Euclidean geometry.


At one time we had to switch our service from DB2 to Postgres. We had to do this with minimum downtime!! The ORM helped us, since it abstracted the DB (to the ORM, the SQL dialect was a plugin), and the application was largely untouched. Also, not all developers can roll their own SQL; SQL embedded in code is harder to refactor and reason about, and eventually it gets abstracted into a CRUD library of sorts anyway.

> Also, not all developers can roll their own SQL

If you can't write SQL, don't use an RDBMS until you learn it. This sounds like gatekeeping because it is: I don't understand why so many people are willing to accept that they need to know their main language to use it, but not that they need to know SQL to use it.
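The "dialect as a plugin" point above can be sketched roughly like this (the class and method names are hypothetical, not any real ORM's API): the application builds queries against one abstract interface, and only the dialect object knows vendor-specific syntax, such as DB2's `FETCH FIRST` versus Postgres's `LIMIT`.

```python
# Toy sketch of an ORM-style dialect plugin: the application code is
# written once, and swapping backends means swapping the dialect object.
class Dialect:
    def limit_clause(self, n: int) -> str:
        raise NotImplementedError

class Db2Dialect(Dialect):
    def limit_clause(self, n: int) -> str:
        return f"FETCH FIRST {n} ROWS ONLY"   # DB2 syntax

class PostgresDialect(Dialect):
    def limit_clause(self, n: int) -> str:
        return f"LIMIT {n}"                   # Postgres syntax

def select_all(table: str, limit: int, dialect: Dialect) -> str:
    # Application-level query construction: dialect-agnostic.
    return f"SELECT * FROM {table} {dialect.limit_clause(limit)}"

print(select_all("users", 10, Db2Dialect()))
print(select_all("users", 10, PostgresDialect()))
```

A real migration involves far more than pagination syntax, of course; the point is only that the dialect differences live behind one seam.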


Same. The author writes like Dave Barry; I burst out laughing more than once. He was able to articulate, with a lot of humor, exactly what I think of Copilot.

Flipping your argument:

It is difficult for CEOs/management to understand that the AI tools don't work when their salary depends on them working, since they have invested billions into them.


What do you call a code change created by Copilot?

A Bull Request


It happens in waves. For a period there was an oversupply of CS engineers, and now the supply will shrink. On top of this, the BS put out by AI code will require experienced engineers to fix.

So, for experienced engineers, I see a great future fixing the shit show that is AI-code.


Each time I arrive at a new job, I take some time to poke around at the various software projects. If the state of the union is awful, I always think: "Great: nowhere to go but up." If the state of the union is excellent, I think: "Uh oh. I will probably drag down the average here and make it a little bit worse, because I am an average software dev."

So many little scripts are spawned, and they are all shit for production. I stopped reviewing them; I pray to the Omnissiah now and spray incense into our server room to placate the machine gods.

Because that shit makes you insane as well.


Nice exposé of the human biases involved; we need more of these to balance the hype.

1) Instead of identifying a problem and then trying to find a solution, we start by assuming that AI will be the solution and then look for problems to solve.

hammer in search of a nail

2) Nearly complete non-publication of negative results

survivorship bias (and confirmation bias)

3) The same people who evaluate AI models also benefit from those evaluations

the power of incentives (and the conflicts therein)

4) The AI bandwagon effect, and fear of missing out

social proof


I have implemented database schemas (without knowing database theory), and these principles are a revelation to me.

A question I have is: given a schema, are there automated verifiers for validating that it adheres to these principles? A schema "linter" of sorts.

There seem to be parallels to linear algebra here (orthogonal bases, decompositions, etc.).
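A full normal-form verifier would need the functional dependencies declared, but a crude schema "linter" is easy to script. A minimal sketch, assuming SQLite and two purely illustrative rules (no primary key, and numbered column names that usually signal a repeating group violating first normal form):

```python
import sqlite3

def lint_schema(conn):
    """Flag tables with no primary key, and columns whose names end in a
    digit (e.g. phone1, phone2) - a common sign of a repeating group."""
    warnings = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, dflt_value, pk)
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        if not any(col[5] for col in cols):
            warnings.append(f"{table}: no primary key")
        for col in cols:
            if col[1][-1].isdigit():
                warnings.append(f"{table}.{col[1]}: looks like a repeating group")
    return warnings

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contact (name TEXT, phone1 TEXT, phone2 TEXT)")
print(lint_schema(conn))
# ['contact: no primary key',
#  'contact.phone1: looks like a repeating group',
#  'contact.phone2: looks like a repeating group']
```

Checking for higher normal forms (2NF/3NF/BCNF) is harder precisely because the dependencies live in the domain, not in the DDL; the linter analogy only goes so far.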


Instead of "Step 1: Upload Your Diagram", if you could say "describe your diagram", that would take it to the next level: you would generate a workflow from the description using an LLM, which the user could then edit. This takes out the painful part of diagram creation and makes it fun for the user. For example, for infra automation a description could be: "first download this from this url. then, run this command, then take the output and feed it to this, then reboot, email me if there is any problem".

Great idea and a work in progress :)

To make it less ambiguous, you could let the user describe the diagram (what the software is supposed to do) in some sort of rigid, unambiguous, reduced English /s

This would have the added benefit of being able to describe to the computer exactly what you want to happen and how.

Feels like we're onto something here.


And then we can just use some process to turn those words into machine language. Maybe we can call it a compiler or an interpreter?

This [1] is an old and brilliant article by Dijkstra, "On the foolishness of natural language programming", relevant to this debate.

The argument is that the precision afforded by formal languages for programming, math, etc. was the key enabler of all the progress made in information processing.

ie, vibe-coding with LLMs will turn coding into a black art known only to the shamans who can prompt well.

[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

