Hacker News

My experience is sometimes it works amazingly well like you say, and sometimes it doesn't.

LLMs are great at translations, like "convert this SQL to jOOQ" or "convert this Chinese text to English". If you were going from Haskell -> Rust it might have trouble fighting with the borrow checker, or it might not, if your memory usage patterns are straightforward or you can "Rc all the things". Haskell is a particularly good source language here, because less mutable state means less complexity when reasoning about state.
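To make the "Rc all the things" escape hatch concrete, here's a minimal sketch (my own illustration, not anything a model produced): a Haskell-style persistent cons list ported to Rust, where Rc gives you the structural sharing Haskell gets for free, so there's no fight with the borrow checker and no deep cloning.

```rust
use std::rc::Rc;

// A persistent cons list, mirroring Haskell's [a]: values are shared,
// never mutated, so "modified" lists just reuse the old tail via Rc.
#[derive(Debug)]
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

use List::{Cons, Nil};

// Like (:) in Haskell: returns a new list that shares `tail` instead of
// copying it or taking ownership.
fn prepend(head: i32, tail: &Rc<List>) -> Rc<List> {
    Rc::new(Cons(head, Rc::clone(tail)))
}

fn sum(list: &List) -> i32 {
    match list {
        Nil => 0,
        Cons(x, rest) => x + sum(rest),
    }
}
```

With this, two lists can share one tail with no lifetime annotations at all: `prepend(1, &base)` and `prepend(10, &base)` both point at the same `base`. The cost is runtime refcounting instead of Haskell's GC, which is usually an acceptable trade in a first-pass port.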

FWIW even with machine translations (on GPT-4o) I do get occasional hallucinations. Like, it'll add an additional sentence or relative clause that wasn't in the original. It'll still make sense in context, so the overall meaning is the same, but it's not what I'd expect from, say, a human translator. (For context, I know enough of both languages—English and Japanese—to be able to verify this myself.)

It doesn't happen very often—maybe 1 out of 50 times?—but it's definitely a somewhat regular occurrence. And of course, it's not just a result of the fact that every language is different and so the same meaning will need to be expressed differently—it's actually hallucinating bona fide new content.

Oh, and IME it doesn't ask clarifying questions that a human translator would know are necessary (at least without explicit prompting to do so, which I admittedly haven't tried). So when translating from a high-context language like Japanese, it'll come up with a translation that works in some contexts but is meaningless in others. I figure this is related to the overconfidence issue that's prevalent with (current) LLMs.

So very tempting to put it to the task. Any takers? Try converting https://hackage.haskell.org/package/megaparsec to rust.

Just tried this using similar prompting to the original and it required several rounds of telling it about compiler errors before generating something that looked suspiciously like a _very_ basic version of "nom", without any of the state/position tracking or the nice error model in megaparsec. I suspect anyone talking about instances where doing something like this "just works" is a grifter/aspiring AI influencer.
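For anyone who hasn't used megaparsec: the gap being described is roughly this. A "very basic nom" is a function from input to (rest, output), while megaparsec threads a state with the current position through every parser so failures can report where they happened. Here's a toy sketch of that position-tracking shape (my own illustration of the difference, not the model's actual output, and nowhere near megaparsec's real error model):

```rust
// A megaparsec-style parser threads state (input + offset) so errors
// carry a position, unlike a bare nom-style Fn(&str) -> (rest, out).
#[derive(Debug, Clone, Copy)]
struct State<'a> {
    input: &'a str,
    offset: usize, // analogous to megaparsec's SourcePos tracking
}

#[derive(Debug, PartialEq)]
struct ParseError {
    offset: usize,          // where parsing failed
    expected: &'static str, // what was expected there
}

type PResult<'a, T> = Result<(State<'a>, T), ParseError>;

// Consume one character satisfying `pred`, advancing the offset.
fn satisfy<'a>(st: State<'a>, pred: fn(char) -> bool, expected: &'static str) -> PResult<'a, char> {
    match st.input[st.offset..].chars().next() {
        Some(c) if pred(c) => Ok((
            State { input: st.input, offset: st.offset + c.len_utf8() },
            c,
        )),
        _ => Err(ParseError { offset: st.offset, expected }),
    }
}

// One or more ASCII digits, folded into a number (a `many1` of sorts).
fn number<'a>(mut st: State<'a>) -> PResult<'a, u32> {
    let (next, first) = satisfy(st, |c| c.is_ascii_digit(), "digit")?;
    st = next;
    let mut n = first.to_digit(10).unwrap();
    while let Ok((next, c)) = satisfy(st, |c| c.is_ascii_digit(), "digit") {
        n = n * 10 + c.to_digit(10).unwrap();
        st = next;
    }
    Ok((st, n))
}
```

So parsing "42x" succeeds with the state advanced to offset 2, while parsing "x" fails with an error pointing at offset 0 expecting a digit. Megaparsec additionally handles custom error components, error recovery, backtracking control, and stream abstraction—none of which a first LLM attempt is likely to reproduce.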


