Agree on mise. It's a great tool, really well implemented and easy to use. I've been trying to set up hk[0] this week and it's unfortunately not been as smooth a ride though.
That's fair. The DX of hk is a much harder problem since it will always require a decent amount of customization to fit into a project. I will be improving this though.
I'd probably say hk is the most challenging pre-commit manager to set up compared to its peers. That said, it's also the only one that can run hooks in parallel safely and deal with partially staged files, where the others don't bother with these problems.
At least right now hk is good for folks that want the fastest and don't mind a bit of effort. Hopefully I can improve that and make it the best all-around.
I'm very open to a bit of a learning curve! I wasn't able to get a pre-commit hook running 'tofu fmt -check' against the list of changed .tf files, which was frustrating! I found working with pkl tough as there's little to no editor support (compared to writing tasks in TOML with mise). I tried adding a post-install hook to mise to run hk install, which had surprising side effects!
Yeah, I found the import of existing pre-commit config wasn't very useful. I just switched to using prek as a much faster drop-in replacement for pre-commit https://github.com/j178/prek. Really like mise though, and just started using fnox yesterday.
Mind if I ask what trouble you've had setting up hk? I've been using it a while now and I love it almost as much as I love mise. Took me a little while to get my head around pkl (and if I'm honest, I'm very much still winging it) but otherwise it's been a joy to use.
No support for OpenTofu, so I had to write a custom hook for tofu instead of terraform. Then the hook itself didn't work because tofu fmt didn't like being passed the full list of changed files instead of just the .tf files. Then I had an issue with tflint. It wasn't clear that hk would install in the current directory rather than the git repo root. Writing pkl was awkward; VS Code has no support for it.
Our use case is a dotnet project with infra defined in terraform. dotnet format is too slow to run as a pre-commit hook, so I wanted to try tflint and tofu fmt, as I know they are very quick and relatively easy to work with.
They both accept a list of files to work on, but the filter in hk gives each step the full list of changed files, so if a .cs file and a .tf file both change, both steps fire with both the .cs and the .tf file.
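One workaround is to wrap tofu in a small filter script that drops the non-Terraform files before formatting. A sketch, assuming the hook passes the changed files as command-line arguments (the script name and wiring are my assumptions, not an hk convention):

```python
# tofu_fmt_filter.py -- hypothetical wrapper so 'tofu fmt -check' only ever
# sees Terraform files, even when the hook hands over every changed file.
import subprocess
import sys

def tf_only(files):
    """Keep only *.tf files from whatever list the hook passes in."""
    return [f for f in files if f.endswith(".tf")]

if __name__ == "__main__":
    tf_files = tf_only(sys.argv[1:])
    if tf_files:
        # Forward just the Terraform files; exit quietly when none changed.
        sys.exit(subprocess.run(["tofu", "fmt", "-check", *tf_files]).returncode)
```

Pointing the hook at the wrapper instead of tofu directly sidesteps the mixed-file-list problem without any changes to hk itself.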
I think a small improvement would be adding a matched_files template variable that only expands to the files matching the step's glob rule. I also think an LSP integration for VS Code would go a long way. I could manage the first, but the second might be pushing my limits.
We’ve built a solution to this problem in conjunction with a university that is pretty low effort to implement, but very few professors can be bothered to even try it out (the apathy and red tape is unreal). Honestly, it has been disheartening that distribution is so tough, as the results have been great for those who are using it.
As I understand it, it records you as you write, including every edit along the way, and verifies the work is human based on that. See the demo at https://inktrail.co/in-action
It will probably work well right now, but I don't know how easy that would be to fool once the hucksters build tools to circumvent it.
> very few professors can be bothered to even try it out
Do you happen to know if there is overlap between these professors and the ones who also refuse to use AI detectors? Apathy could be the reason, but I wonder how much of it is cynicism bred by the inefficacy of existing AI-detection tools.
Well that's terrible news. Currently building a product for the same market (completely different solution if the nephew comment is correct about your product). I'm already not thrilled to be in ed-tech selling to instructors with admin's money. I thought at least the instructors would have some enthusiasm to solve the problem. Shit.
As a professor who would be interested in a solution to this problem, I'd be curious to hear more details. FERPA issues can sometimes make it difficult to adopt solutions where student info of any kind needs to be sent to a third party.
I used these in a distributed sync in about 2013-2014. When Postgres got JSON support we ended up going centralized for that product and a simple logic clock was all we needed. Nevertheless, I still think they’re very cool.
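For anyone who hasn't met the term, the "simple logic clock" here is presumably something along the lines of a Lamport clock: a bare counter that only has to produce an ordering consistent with causality, which is often enough once all writes funnel through one database. A minimal sketch:

```python
# Minimal Lamport-style logical clock: a counter that orders events
# consistently with causality, with no wall-clock time involved.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """Advance for a local event and return the new timestamp."""
        self.time += 1
        return self.time

    def recv(self, remote_time):
        """Merge the timestamp carried by an incoming message."""
        self.time = max(self.time, remote_time) + 1
        return self.time
```

Compared to a CRDT, this gives you ordering but no automatic merge semantics, which is exactly the trade-off that becomes acceptable after going centralized.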
We built a simple but novel solution that is far more reliable and works completely differently from the GPTZero and OpenAI methods. I'm not posting a link as we're not ready for the HN hug of death, but please PM if interested.
The saddest thing is that this project has been one of the most demoralizing I've ever taken on. Day to day we see so many students being failed by teachers and school leadership who care more about "adapting to AI" than real student outcomes today.
In practice, we've found teachers generally don't want to have the difficult conversations with students, even when hard evidence of cheating is handed to them.
And generally school/university/college leadership have no real tactics for implementing their "AI strategy" other than training their own chatbots (wtf) and "adapting assessment to use AI".
Unfortunately, a simple non AI fix to the problem is definitely not as good for their careers.
IMHO without a change, we're creating a pretty bleak future for students of the next few years.
The problem is that a lot of students were already cheating, and there's no real benefit to teachers trying to punish the students for it.
The schools don't want to do it, because to them letting one or two students slide is much better than {the public backlash of admitting someone cheated, the lost revenue from kicking paying students out, ...}, and the teachers don't want to do it because the schools will tell them to shut up and stop making noise about it, and not listening might poison their careers. Of course, if you do that, the students will keep cheating, and eventually you graduate people who poison your reputation, but it turns out that takes a while to happen, if ever...
At least when I was in college, it was pretty obvious to the students which students Did Not Do The Work(tm), and it was eye-opening how bad it could get when you took the "for non-majors" versions of various classes.
ChatGPT et al are just bringing to the public eye how bad this already was, by lowering the bar so far there's no barrier to entry: no "if you know you know" whisper network of people cheating for each other, no wink-and-a-nod "tutoring" service being paid for, just... press button, receive paper.