Location: Seattle, WA

Remote: OK

Willing to relocate: Maybe

Technologies: Java, Golang, RESTful APIs, Astro, React, TypeScript, AWS (EC2, S3, RDS, Lambda), Linux, Bash, Git, Containers

Résumé/CV: https://twoku.com/Abrams_Jonathan_Resume.pdf

Email: jnthnab@gmail.com

I'm a CS new grad looking for entry- to mid-level positions to gain industry experience.

Full-stack experience building backend APIs in Java and Golang, and frontend interfaces with Astro and React. I'm a Linux enthusiast with a fair bit of systems administration knowledge as well.

Familiar with VoIP/messaging platforms such as Discord and Mumble.

Willing and able to adapt to new technologies!



From a marketing perspective, perhaps, but it's still a supported Ubuntu LTS release at heart, and having two different version numbers would create ambiguity.

Things that work on that particular Ubuntu LTS should work in Pop!_OS, and at least you don't have to cross-reference version numbers.

Thankfully they keep important things like the kernel and hardware support more up to date than the version number would suggest, but I think it's a common point of confusion.


I consider myself pretty technically literate, and not the worst at programming (though certainly far from the very best). Even so, I can spend plenty of time arguing with LLMs, which will give me plausible-looking but extremely broken answers to some programming problems.

In the programming domain I can at least run something and see that it doesn't compile or work as I expect, but you can't verify that a written statement about someone or something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe the model could be right about uncommon knowledge you don't know how to verify (or else you wouldn't be asking it in the first place).


Even with code, "seeing" a block of code working isn't a guarantee there's not a subtle bug that will expose itself in a week, a month, or a year under the right conditions.
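
A contrived Go sketch of the kind of thing I mean (my own toy example, not from any real codebase): a helper that passes every casual test, then silently breaks once the inputs get large.

    package main

    import "fmt"

    // midpoint looks right and passes every small test, but lo+hi
    // overflows int32 once the operands are large enough.
    func midpoint(lo, hi int32) int32 {
        return (lo + hi) / 2 // subtle bug: should be lo + (hi-lo)/2
    }

    func main() {
        fmt.Println(midpoint(0, 10))                        // 5, looks fine
        fmt.Println(midpoint(2_000_000_000, 2_100_000_000)) // negative: overflow
    }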


I've pointed this out a lot, and I often get replies along the lines of "people make mistakes too". While that's true, LLMs lack the institutional memory behind decisions. Even good reasoning models can't reliably tell you why they wrote the code they did when asked to review it. They can't even be trusted to run tests honestly, since they'll hardcode passing values.
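
In Go terms (Sum and the test names are invented for illustration), that failure mode is a "test" whose expected value is copied from the code's own output, so it can never fail:

    // sum_test.go (hypothetical)
    package main

    import "testing"

    // Sum is the function under test.
    func Sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    // A real test derives the expected value independently.
    func TestSum(t *testing.T) {
        if got := Sum([]int{1, 2, 3}); got != 6 {
            t.Fatalf("Sum = %d, want 6", got)
        }
    }

    // The failure mode: "want" is just whatever the code returned,
    // so the test passes no matter how broken Sum is.
    func TestSumHardcoded(t *testing.T) {
        got := Sum([]int{1, 2, 3})
        want := got // hardcoded from observed output; asserts nothing
        if got != want {
            t.Fatalf("Sum = %d, want %d", got, want)
        }
    }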

With the same code out of an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better, they tend to learn and not make the same mistake again. LLMs will happily screw up randomly on every repeated prompt.

The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author, so you need to build all that context and then reason through the code. If an LLM is writing a lot of your code, you're missing out on all the context you'd normally build by writing it.


The Nvidia CUDA repos are still on Debian 12 as well, which was a blocker for me. (Some claim it works fine anyway, but not in my experience.)

It's not like the Debian release schedule is a secret; I suspect there's just less corporate pressure to prioritize Debian.


The NVidia bookworm repo worked fine on all my machines. What didn't work for you? I deduced there wasn't really anything Debian-12-specific in there (it's still a Linux kernel with systemd).


I just never managed to load the Nvidia module properly and fell back to the open drivers, which don't work on my system. I didn't really feel like investigating further, because the whole point of using Debian was to set it up and forget about it.


Did you try the NVidia driver from the Debian repo and/or from NVidia themselves? The former would not load on my Optimus machines either, without any clue as to why. (I sank more time than I should have into this; I literally tried everything on the wiki pages, only to discover they are horribly outdated.) I totally agree: this is not great. I know people will blame me for choosing NVidia, or blame NVidia itself, but why can Fedora, Ubuntu et al. do this right?


The Debian repo proper worked fine, but I hit a problem with an application that wasn't playing nice with X11/GNOME (Zed would just stop accepting text input...), and Wayland is a bad time on the 535/550 Nvidia drivers.

Realistically I can live on X11 outside of my dual-monitor setup and that one application, but things get very choppy with mixed refresh rates. I'm still not the biggest fan of GNOME, but if I'm using Deskflow, only GNOME and KDE support the input-sharing portal on Wayland.


The NVidia repo gives you 580; perhaps that helps (I assume Wayland support is getting better all the time, if ever so slightly). I use KDE, so maybe that contributed to a different experience.


The server browser is deliberately hidden, neglected, and full of spam on the modern versions of CS.

It's no longer the "default" way to play, and only a select few get around to using it. Despite a much larger player base, there is far less server-browser activity than there was in the past.

There are still community servers out there (and niche communities like surf and bhop, where still possible), but they're only around for legacy reasons. If there weren't any lineage there, the feature would have been removed entirely in GO.


They're prioritizing correctness to the spec over speed, and they're still "officially" in pre-alpha. It remains to be seen how well they can bridge the gap.

For casual web browsing it's already plenty fast for a lot of things, but they're a relatively small team fighting against decades of optimization work.


I'm sure in larger codebases it can get unwieldy with tons of TODOs from a lot of different people, but for personal projects I've always found them a good compromise.

For me it's saying, "Yeah, I know it could be better, but I'm not going to break my train of thought over this and context switch. It's not so critical as to break functionality; this would just be nicer."

I really do appreciate TODO highlighting in editors for the odd occasion where I get back to something on a whim and feel like doing a quick fix. (Realistically it's probably not that common, though, and most will sit there indefinitely.)
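
For the record, the kind of TODO I mean, as a Go sketch (names invented): enough context that future me knows why it was deferred, without breaking my train of thought now.

    package main

    type User struct {
        Name string
    }

    // TODO: linear scan is O(n) per lookup; a map keyed by name would
    // be O(1). The list is tiny today and this is correct, so it's not
    // worth the context switch right now.
    func findUser(users []User, name string) *User {
        for i := range users {
            if users[i].Name == name {
                return &users[i]
            }
        }
        return nil
    }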


I really appreciate the feature of JetBrains IDEs whereby the codebase is indexed for TODO comments.

I often find myself with some time on a plane, and cracking open my laptop to dig through the TODOs it surfaces is really cathartic.


I do love the joke, but it's also worth remembering that all of those S's were, to a certain extent, afterthoughts bolted on to fix otherwise insecure protocols.

Given how old FTP and HTTP are, it's fairly understandable that they weren't initially designed with security in mind, but I think it's valid to question why we're still designing insecure systems in 2025.


Totally agree. If we made mistakes in the past, we should have learned from them, and when designing a standard, especially with AI, where the outcome is non-deterministic, we've got to be more careful.


That's precisely the point of the joke: even today, we still design things that will need an S tacked onto them at some point in the future.


There's a pretty big gap between "make it work" and "make it good".

I've found that I can usually convince LLMs to give me at least something that mostly works, but each step compounds excessive extra code, extraneous comments ("This loop goes through each..."), and redundant functions.

In the short term it feels good to achieve something 'quickly', but there's a lot of debt associated with running a random number generator on your codebase.
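
A Go-flavored caricature of what I mean (invented example), next to what it collapses to once you actually "make it good":

    package main

    import "strings"

    // What the LLM hands you: narrating comments and a hand-rolled loop.
    func joinNames(names []string) string {
        // Create a variable to hold the result.
        result := ""
        // This loop goes through each name in the slice.
        for i, name := range names {
            // Check if this is not the first name.
            if i > 0 {
                // Add a comma separator.
                result += ", "
            }
            // Append the current name to the result.
            result += name
        }
        // Return the final result.
        return result
    }

    // What "make it good" collapses it to.
    func joinNamesClean(names []string) string {
        return strings.Join(names, ", ")
    }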


In my opinion, the difference between good code and code that simply works (sometimes barely) is that good code will still work (or error out gracefully) when the state and the inputs are not as expected.

Good programs are written by people who anticipate what might go wrong. If the documentation says "don't do X", they know a tester is likely to try X, because a user will eventually do it.
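
A minimal Go sketch of that distinction (invented example): the naive version assumes the happy path, while the good one errors out gracefully when the input isn't as expected.

    package main

    import (
        "errors"
        "fmt"
    )

    // Works, barely: panics with a divide-by-zero on empty input.
    func averageNaive(xs []int) int {
        sum := 0
        for _, x := range xs {
            sum += x
        }
        return sum / len(xs)
    }

    var errEmpty = errors.New("average: empty input")

    // Anticipates the bad state and reports it instead of crashing.
    func averageSafe(xs []int) (float64, error) {
        if len(xs) == 0 {
            return 0, errEmpty
        }
        sum := 0
        for _, x := range xs {
            sum += x
        }
        return float64(sum) / float64(len(xs)), nil
    }

    func main() {
        if _, err := averageSafe(nil); err != nil {
            fmt.Println("handled:", err) // handled: average: empty input
        }
    }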


I feel like you're talking about programs here rather than code. A program that behaves well is not necessarily built with good code.

I can see an LLM producing a good program with terrible code that's hard to grok and adjust.

