
I'm not sure I believe that, when just yesterday I ran a bunch of data analysis, simulation, and visualization based on a single csv and produced 5-10 decent matplotlib plots in a 90 minute back and forth between OpenAI canvas, vscode, python and jupyter. I didn't believe some of my results and then discovered some problems with the dataset itself, so there was some "real work" done in those 90 minutes.

I can say with certainty that I wouldn’t know how to wield matplotlib and pandas with such fluency in an hour, even though I am perfectly able to read the implementation and query for some relevant intermediate results to check my mental model.

Granted, this is not the world's most complex problem, but it is a good example of the kind of domain where these tools are already incredibly useful and productive (I didn't have to consult the docs even once). So in a way I think of LLMs as very good interfaces to the docs :)
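Roughly, each of those plots came from a loop of this shape (a minimal sketch; the file name and column names are made up for illustration):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical file and column names, purely illustrative.
    df = pd.read_csv("measurements.csv")

    # Quick sanity checks before trusting any plot.
    print(df.describe())
    print(df.isna().sum())

    # One of the "decent matplotlib plots": a histogram of one column.
    fig, ax = plt.subplots()
    df["value"].hist(bins=50, ax=ax)
    ax.set_xlabel("value")
    ax.set_ylabel("count")
    fig.savefig("value_hist.png")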

I often feel that the UI aspect of new technologies is underappreciated. All our computers (even grep) are Turing complete. This means software engineering is fundamentally a discipline of building better user interfaces that allow us to do whatever we want more easily.

I am always curious how other people experience these things as so useless :)




Most of my work is data analysis, and it's the area where I find LLMs most difficult to use, and potentially dangerous!

I really worry when I think of how less experienced colleagues and Excel power-users might approach a pandas/matplotlib workflow with an LLM.

Very normal example from the other week. I've got a big csv of housing data, and poking around I spot a bunch of very weird things in the prices: cheap places where I'd expect high prices, a strange bimodal distribution. I spend a few hours thinking about it before taking it up with the colleague who'd provided the data. There's a bug in the scraper that's messed up a bunch of values!

It'd be super-easy to naively run a simple LLM-driven analysis and miss this sort of thing.
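The checks that catch this are cheap; the hard part is knowing to run them and actually staring at the output. Something like this (a sketch with made-up file and column names) would have surfaced both the bimodal prices and the suspicious cheap rows:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical file and column names ("price", "suburb"), for illustration.
    df = pd.read_csv("housing.csv")

    # A histogram makes a bimodal price distribution obvious at a glance.
    df["price"].hist(bins=100)
    plt.xlabel("price")
    plt.show()

    # Spot-check the suspicious end: cheap rows where you'd expect high prices.
    print(df.sort_values("price").head(20))
    print(df.groupby("suburb")["price"].describe())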

When you're "building things" with AI, feedback is immediate. You press a button, it doesn't work as expected, the charts you can quickly run up don't give the same signals.


I worry about the same thing. Even with traditional tools, people mess this up all the time, and in the same way: treating their algorithms like black boxes. To really do data analysis you have to understand your data and understand what your algorithms are doing and what their results mean. Too often people just naively plug in numbers and take the result at face value. Unfortunately it's not that easy. I worry "AI" just accelerates this problem and brings it to more domains.


It's hard to judge without seeing it. Yes, when something is fairly routine (i.e. there are many examples, especially in similar formats), LLMs will almost always write that code with few or no issues. But outside that, things get far harder to evaluate.

  > I often feel that the UI aspect of new technologies is underappreciated.
I do too. But I should also mention that the human-language interface is not a great UI. I mean, how often do you miscommunicate with your boss?

  > other people experience these things as so useless
Who said useless?


The one use case I've found LLMs excel at is getting started with a new library. Even if they get a lot of the API wrong, having most of the setup / boilerplate figured out is useful.



